Install Serverless and Cloud Events on an existing cluster using GitOps

Note

This is a work in progress; check back for updates.

Overview

This guide illustrates the steps to install Serverless and Cloud Events on an existing cluster using ArgoCD, provided by the Red Hat OpenShift GitOps operator.

Additionally, it shows how to configure the Red Hat OpenShift Pipelines operator (based on Tekton) to send cloud events and how to configure a "slack-notifications" app to receive those events.

Install Serverless and Cloud Eventing

Pre-requisites

The following is required before proceeding to the next section.

  • Provision an OpenShift cluster.
  • Log in to the cluster with the oc CLI.
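
For example, you can log in with a token copied from the web console (the token and server URL below are placeholders for your own cluster):

    oc login --token=<api-token> --server=https://api.<cluster-domain>:6443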

Installation Steps

  1. Fork the multi-tenancy-gitops repository and clone your fork.

    git clone git@github.com:{gitid}/multi-tenancy-gitops.git
    
  2. Change to the kustomize branch of your fork.

    cd multi-tenancy-gitops
    git checkout kustomize
    
  3. Install the Red Hat OpenShift GitOps operator.

    • For OpenShift 4.6
      oc apply -f setup/ocp46/
      
    • For OpenShift 4.7
      oc apply -f setup/ocp47/
      
  4. Update the files to reference your forked repository. Run the set-git-source.sh script, which replaces the cloud-native-toolkit GitHub org references with your {gitid}.

    export GIT_USER={gitid}
    ./scripts/set-git-source.sh
    
    Update git org

  5. Push the changes to your forked repository.

    git add .
    git commit -m "push repo gitid changes"
    git push
    
    Update git add, commit, push

  6. There are different deployment options provided as folders in the repository. In this guide we will use the default single-cluster deployment. The other options are located in the others folder.

    ./0-bootstrap
    └── argocd
        ├── bootstrap.yaml
        ├── others
        │   ├── 1-shared-cluster
        │   ├── 2-isolated-cluster
        │   └── 3-multi-cluster
        └── single-cluster
    
  7. If you choose to use a different deployment option, edit 0-bootstrap/argocd/bootstrap.yaml, modify the spec.source.path, and update the metadata.name accordingly. For example, to use 1-shared-cluster, change the path to 0-bootstrap/argocd/others/1-shared-cluster.

    apiVersion: argoproj.io/v1alpha1
    kind: Application
    metadata:
      name: bootstrap-1-shared-cluster
      namespace: openshift-gitops
    spec:
      destination:
        namespace: openshift-gitops
        server: https://kubernetes.default.svc
      project: default
      source:
        path: 0-bootstrap/argocd/others/1-shared-cluster
        repoURL: https://github.com/lsteck/multi-tenancy-gitops.git
        targetRevision: kustomize
      syncPolicy:
        automated:
          prune: true
          selfHeal: true
    
  8. In this guide we will use the unchanged 0-bootstrap/argocd/bootstrap.yaml, which uses the single-cluster deployment.

    apiVersion: argoproj.io/v1alpha1
    kind: Application
    metadata:
      name: bootstrap-single-cluster
      namespace: openshift-gitops
    spec:
      destination:
        namespace: openshift-gitops
        server: https://kubernetes.default.svc
      project: default
      source:
        path: 0-bootstrap/argocd/single-cluster
        repoURL: https://github.com/lsteck/multi-tenancy-gitops.git
        targetRevision: kustomize
      syncPolicy:
        automated:
          prune: true
          selfHeal: true
    
  9. Under the cluster folder (single-cluster in this guide) there are 1-infra, 2-services, and 3-apps folders, which define the infrastructure, services, and app resources to be deployed, respectively.

    ./0-bootstrap
    └── argocd
        ├── bootstrap.yaml
        ├── others
        └── single-cluster
            ├── 1-infra
            ├── 2-services
            ├── 3-apps
            ├── bootstrap.yaml
            └── kustomization.yaml
    
  10. Open the kustomization.yaml file under the 1-infra folder.

    ./0-bootstrap
    └── argocd
        ├── bootstrap.yaml
        ├── others
        └── single-cluster
            └── 1-infra
                ├── 1-infra.yaml
                ├── argocd
                └── kustomization.yaml
    
  11. Uncomment the lines under the # Openshift Serverless/Eventing section to deploy those resources.

    # Openshift Serverless/Eventing
    - argocd/namespace-openshift-serverless.yaml
    - argocd/namespace-knative-serving.yaml
    - argocd/namespace-knative-eventing.yaml
    
  12. Open the kustomization.yaml file under the 2-services folder.

    ./0-bootstrap
    └── argocd
        ├── bootstrap.yaml
        ├── others
        └── single-cluster
            ├── 1-infra
            └── 2-services
                ├── 2-services.yaml
                ├── argocd
                └── kustomization.yaml
    
  13. Uncomment the lines under the # Openshift Serverless/Eventing section to deploy those resources.

    # Openshift Serverless/Eventing
    - argocd/operators/openshift-serverless.yaml
    - argocd/instances/knative-eventing-instance.yaml
    
  14. Installing Serverless and Eventing doesn't require any resources under the 3-apps folder, so the kustomization.yaml in that folder doesn't need to be changed.

  15. Push the changes to your forked repository.

    git add .
    git commit -m "push serverless and eventing"
    git push
    
  16. Create the bootstrap ArgoCD application.

    oc apply -f 0-bootstrap/argocd/bootstrap.yaml -n openshift-gitops
    
  17. From the OpenShift console, launch ArgoCD by clicking the ArgoCD link in the Applications (9 squares) menu.

    Launch ArgoCD

  18. The ArgoCD user ID is admin and the password can be found in the argocd-cluster-cluster secret in the openshift-gitops project namespace. You can extract the secret with the following command:

    oc extract secret/argocd-cluster-cluster --to=- -n openshift-gitops
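
    Optionally, if you have the argocd CLI installed, you can also log in from the command line. The route host and password below are placeholders; --insecure may be needed when the route uses a self-signed certificate.

    # Find the Argo CD server route host in the openshift-gitops namespace
    oc get routes -n openshift-gitops
    # Log in as admin with the extracted password
    argocd login <argocd-route-host> --username admin --password <password> --insecure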
    

  19. On the ArgoCD UI you can see the newly created bootstrap application.

    ArgoCD bootstrap app

  20. After several minutes you will see all the other ArgoCD applications with a status of Healthy and Synced. The status will progress from Missing to OutOfSync to Syncing. If you see a status of Sync failed, there were errors during the sync.

    ArgoCD applications
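
    Each ArgoCD application is also a custom resource in the openshift-gitops namespace, so you can check the same sync and health status from the CLI:

    oc get applications.argoproj.io -n openshift-gitops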

  21. You can check that the Red Hat OpenShift Serverless operator, which provides the serverless and eventing capabilities, has been installed from the Installed Operators page on the console.

    OpenShift Serverless operator
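
    You can also verify from the CLI that the operator is installed and that the Knative Eventing instance has been created (the namespace names follow the resources uncommented in the earlier steps):

    # Operator install (ClusterServiceVersion) in the openshift-serverless namespace
    oc get csv -n openshift-serverless
    # Knative Eventing instance created by the knative-eventing-instance application
    oc get knativeeventing -n knative-eventing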

Install Slack Notification app and configure Tekton to emit Cloud Events

Note

Both installations could be performed at the same time. They were broken out in this guide to illustrate how you can install Serverless and Eventing without installing the Slack Notification app.

Pre-requisites

The following are required before proceeding.

  • Complete the Serverless and Cloud Eventing installation in the previous section.
  • Create a Slack incoming webhook for the channel that should receive notifications and note its URL; it is used later to create the slack-secret.

Installation Steps

  1. Open the kustomization.yaml file under the 1-infra folder.

    ./0-bootstrap
    └── argocd
        ├── bootstrap.yaml
        ├── others
        └── single-cluster
            └── 1-infra
                ├── 1-infra.yaml
                ├── argocd
                └── kustomization.yaml
    
  2. Uncomment the lines under the # Slack Notifications section to deploy those resources.

    # Slack Notifications
    - argocd/namespace-slack-notifications.yaml
    
  3. Push the changes to your forked repository.

    git add .
    git commit -m "push slack notifications namespace"
    git push
    
  4. After a few minutes you should see an ArgoCD namespace-slack-notifications app. This app creates the slack-notifications project namespace where we will deploy the slack notifications app. Slack Notifications Namespace
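
    You can confirm that the namespace was created from the CLI:

    oc get namespace slack-notifications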

  5. Before we deploy the app, we need to create a secret to store the Slack incoming webhook you created as a pre-requisite. This secret needs to be in the slack-notifications project namespace. You can generate an encrypted secret containing the Slack webhook using the Sealed Secrets operator, or you can create the secret manually as follows. NOTE: replace <webhook-url> with your Slack webhook URL.

    WEBHOOK=<webhook-url>
    
    oc project slack-notifications
    
    oc create secret generic slack-secret \
     --from-literal=SLACK_URL=${WEBHOOK}
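
    If you prefer the Sealed Secrets route instead, a minimal sketch (assuming the Sealed Secrets controller is installed on the cluster and the kubeseal CLI is available; kubeseal may need --controller-namespace and --controller-name depending on where the controller runs) renders the same secret without creating it and seals it so the result can be committed to Git:

    oc create secret generic slack-secret \
     --from-literal=SLACK_URL=${WEBHOOK} \
     -n slack-notifications \
     --dry-run=client -o yaml | kubeseal --format yaml > slack-secret-sealed.yaml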
    
  6. Installing the Slack Notification app doesn't require any resources under the 2-services folder, so the kustomization.yaml in that folder doesn't need to be changed.

  7. Open the kustomization.yaml file under the 3-apps folder.

    ./0-bootstrap
    └── argocd
        ├── bootstrap.yaml
        ├── others
        └── single-cluster
            ├── 1-infra
            ├── 2-services
            └── 3-apps
                ├── 3-apps.yaml
                ├── argocd
                └── kustomization.yaml
    
  8. Uncomment the lines under the # Slack Notifications section to deploy those resources.

    # Slack Notifications
    - argocd/slack-notifications/slack-notifications.yaml
    - argocd/shared/config/openshift-pipelines/configmap/openshift-pipelines-config.yaml
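
    The openshift-pipelines-config entry applies the Tekton configuration that points pipeline cloud events at the slack notifications app. The exact manifest lives at the path referenced above; conceptually (the names and sink URL below are illustrative only, not the repository's actual content) this amounts to setting default-cloud-events-sink in Tekton's config-defaults ConfigMap:

    # Illustrative sketch only -- the real configuration is in the repository path above
    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: config-defaults
      namespace: openshift-pipelines
    data:
      default-cloud-events-sink: http://<slack-notifications-sink-url>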
    
  9. If you changed the slack-secret name or key, you need to update the secret name and key value in slack-notifications.yaml.

    1. Open slack-notifications.yaml

      ./0-bootstrap
      └── argocd
          └── single-cluster
              └── 3-apps
                 └── argocd
                     └── slack-notifications
                         └── slack-notifications.yaml
      

    2. Modify the name and key to match your secret name and the key name.

        secret:
          # provide name of the secret that contains slack url
          name: slack-secret
          # provide key of the secret that contains slack url
          key: SLACK_URL
      

  10. Push the changes to your forked repository.

    git add .
    git commit -m "push slack notifications app"
    git push
    
  11. After a few minutes you will see an apps-slack-notifications ArgoCD app and an openshift-pipelines-config ArgoCD app. apps Slack Notifications

  12. On the OpenShift Console you can see the slack notifications app deployment in the slack-notifications project namespace. Slack Notifications App Deployment
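
    The same deployment can be checked from the CLI:

    oc get deployment -n slack-notifications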

Now, when you run pipelines, you will receive Slack notifications.
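
To test the whole flow, trigger any pipeline run, for example with the tkn CLI (the pipeline name and namespace below are placeholders for a pipeline of your own):

    tkn pipeline start <pipeline-name> -n <namespace> --showlog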