
Confluent on OpenShift GitOps Demo


An end-to-end demo of setting up Confluent Operator on OpenShift 4.


Usage

Overview

A complete demo Confluent environment running on OpenShift, with declarative Kafka, ZooKeeper, and Schema Registry deployments managed by Confluent for Kubernetes (CFK). Platform components and resources are managed through GitOps using declarative YAML, FluxCD, and the Operator Pattern.

Diagram

(solution diagram image)

Examples

Getting Started

Requirements

Pre-Deployment Tasks

  • Log in to your AWS account and create a new user with the username `osdCcsAdmin` and attach administrator permissions. NOTE: The username has to be exactly that value. Enable programmatic access and download the access key and secret - you will need them in part 1.
  • Fork confluent-openshift-gitops-demo repository into your own GitHub account and clone to your local machine
  • Fork team-alpha-resources repository into your own GitHub account and clone to your local machine

1. Create an OpenShift cluster

Log in and navigate to the Clusters section in the Red Hat Console. Here you can create a trial cluster on AWS or GCP; for this example we will be using AWS:

  • Click Create trial cluster and select AWS.
  • View and accept the terms and conditions. (this is mandatory to proceed)
  • Complete the form with mandatory information (AWS account ID / Access key and secret / Cluster name / Region)
  • Select Single zone and 8 vCPU 32GiB RAM
  • Keep everything else as default and click Create. Kick back and make yourself a coffee or three - creating the cluster takes around 30 minutes.

2. Connecting to the cluster

  • Navigate to the Clusters section in the Red Hat Console and select the cluster you created in step 1. Before you can log in, we need to configure an OAuth provider.
  • Select the Access control tab and then Identity providers. Click Add identity provider and select HTPasswd. This should present you with an auto-generated set of credentials. Note these down, as you will need them to obtain the token which authenticates you against the control plane API.
  • Next you will need to manually give the admin-xxx user cluster administrative access. Click Cluster Roles and Access, then Add user. Paste in the newly created admin-xxx user and select cluster-admins. Click Add user.
  • Now that we have set up the auth provider, we can log into the cluster. Click Open console at the top right. This will open a new window where you can enter the credentials from the previous step.
  • Once logged into the cluster, ensure you are running in Administrator mode, which you can select at the top left. Now we can configure our oc CLI tool to connect to our cluster. At the top right of the console click admin-xxx and then select Copy login command.
  • This will present you with (another) login page; you need to authenticate again with the same credentials. Once successful you will be presented with a Display Token link. Click it to display a command that looks something like this:
      oc login --token=sha256~6CUsCy7sFt3-_xxxxxxxxxxxxxxxxxxxxxxxxx --server=https://api.xxx-demo.3wg2.p1.openshiftapps.com:6443
  • Copy the full command, open your terminal window, then paste and run it. The output should look something like this:
      Logged into "https://api.xxx-demo.3wg2.p1.openshiftapps.com:6443" as "admin-TQ0NzM" using the token provided.
    
      You don't have any projects. You can try to create a new project, by running
    
      oc new-project <projectname>
  • You can verify you have the correct permissions by running:
      kubectl get pods -A | wc -l
           314
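
As a further sanity check, the commands below confirm who you are logged in as and that you hold cluster-admin rights. This is a sketch assuming the oc CLI used in the login step above:

```shell
# Confirm the login and permissions from step 2 (assumes oc is on your PATH)
oc whoami                                # should print your admin-xxx user
oc whoami --show-server                  # should match the --server URL from the login command
oc auth can-i '*' '*' --all-namespaces   # prints "yes" for cluster-admins
```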

3. Installing Flux and deploying Confluent operator

  • Flux requires additional permissions on the cluster. To grant these, run the following:
      oc adm policy add-scc-to-user privileged system:serviceaccount:flux-system:source-controller
      oc adm policy add-scc-to-user privileged system:serviceaccount:flux-system:kustomize-controller
      oc adm policy add-scc-to-user privileged system:serviceaccount:flux-system:image-automation-controller
      oc adm policy add-scc-to-user privileged system:serviceaccount:flux-system:image-reflector-controller
  • In your terminal, change directory into your confluent-openshift-gitops-demo folder (this is the project you forked and cloned from GitHub) and run the following to bootstrap and install Flux:
      export GITHUB_TOKEN=<<YOUR GITHUB TOKEN>>
      export GITHUB_USER=<<YOUR GITHUB USERNAME>>
      export GITHUB_REPO=confluent-openshift-gitops-demo
      
      flux bootstrap github \
        --owner=${GITHUB_USER} \
        --repository=${GITHUB_REPO} \
        --branch=main \
        --personal \
        --path=cluster-manifests/clusters/ocp
  • This will install the Flux toolkit and also reconcile the cluster against the Kustomize templates contained in the repository. The output should look something like this:
      ► connecting to github.com
      ► cloning branch "main" from Git repository "https://github.com/osodevops/confluent-openshift-gitops-demo.git"
      ✔ cloned repository
      ► generating component manifests
      ✔ generated component manifests
      ✔ committed sync manifests to "main" ("547256db4759a0d2fb3ee377ab9be966e508de4b")
      ► pushing component manifests to "https://github.com/osodevops/confluent-openshift-gitops-demo.git"
      ✔ installed components
      ✔ reconciled components
      ► determining if source secret "flux-system/flux-system" exists
      ► generating source secret
      ✔ public key: ssh-rsa xxx
      ✔ configured deploy key "flux-system-main-flux-system-./cluster-manifests/clusters/ocp2" for "https://github.com/osodevops/confluent-openshift-gitops-demo"
      ► applying source secret "flux-system/flux-system"
      ✔ reconciled source secret
      ► generating sync manifests
      ✔ generated sync manifests
      ✔ committed sync manifests to "main" ("740cb6dcad75341ae342b479d9cd1600deb8afc4")
      ► pushing sync manifests to "https://github.com/osodevops/confluent-openshift-gitops-demo.git"
      ► applying sync manifests
      ✔ reconciled sync configuration
      ◎ waiting for Kustomization "flux-system/flux-system" to be reconciled
      ✔ Kustomization reconciled successfully
      ► confirming components are healthy
      ✔ helm-controller: deployment ready
      ✔ kustomize-controller: deployment ready
      ✔ notification-controller: deployment ready
      ✔ source-controller: deployment ready
      ✔ all components are healthy
  • Now that Flux is installed, we need to create a Kustomization to install the Confluent Operator. First, pull the remote changes, as Flux has committed its sync manifests to your repository:
      git pull
  • Create an operators.yaml file in the confluent-openshift-gitops-demo/cluster-manifests/clusters/ocp folder and paste in the following:
      apiVersion: kustomize.toolkit.fluxcd.io/v1beta1
      kind: Kustomization
      metadata:
        name: infrastructure
        namespace: flux-system
      spec:
        interval: 10m0s
        sourceRef:
          kind: GitRepository
          name: flux-system
        path: ./cluster-manifests/operators
        prune: true
  • Edit the confluent-openshift-gitops-demo/cluster-manifests/clusters/ocp/kustomization.yaml to include the new Kustomization definition:
      apiVersion: kustomize.config.k8s.io/v1beta1
      kind: Kustomization
      resources:
        - gotk-components.yaml
        - gotk-sync.yaml
        - operators.yaml
  • Add and commit both of these files back to the main branch on GitHub. You can either wait a minute for the cluster to automatically sync or trigger it manually using:
      flux reconcile kustomization flux-system --with-source
  • Once this is complete you will have successfully installed the Confluent Operator cluster-wide. This means that all users on the platform can now leverage and deploy Confluent Kafka. You can also navigate to the Operators section in the cluster console and select Installed Operators. There you can filter for Confluent and you should see it has successfully installed.
  • The Confluent Operator requires a service account which the namespaces will use to create the Confluent CRD-based resources. The policy has been included in the repository; we now just need to apply it and link it to the service account using the following commands:
      oc apply -f ./dwp-ocp-demo/policy/confluent-security-context.yaml
      oc adm policy add-scc-to-user confluent-operator -z confluent-for-kubernetes -n team-alpha
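
For reference, a minimal sketch of what an SCC like confluent-security-context.yaml might contain. Every field below is an assumption for illustration; the actual file in the repository is authoritative:

```yaml
# Hypothetical sketch of an SCC for the Confluent service account.
# The real confluent-security-context.yaml in the repository may differ.
apiVersion: security.openshift.io/v1
kind: SecurityContextConstraints
metadata:
  name: confluent-operator
allowPrivilegeEscalation: false
allowPrivilegedContainer: false
runAsUser:
  type: MustRunAsRange      # run within the namespace's UID range
seLinuxContext:
  type: MustRunAs
fsGroup:
  type: MustRunAs
supplementalGroups:
  type: RunAsAny
volumes:
  - configMap
  - secret
  - emptyDir
  - persistentVolumeClaim
```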

4. Deploying your first Kafka cluster

  • Now that the Confluent Operator is installed and monitoring all namespaces in the cluster, we can get to the exciting stuff. Navigate to the other repository you cloned, team-alpha-resources. This repo contains everything Team Alpha need to get up and running with their own self-contained deployment of Kafka and all its dependencies.
  • Flux will do all of the hard work for us; we just need to add another Git repository for it to monitor. We can do this using the following commands:
      flux create source git team-alpha-resources \
        --url=https://github.com/osodevops/confluent-openshift-team-alpha-resources \
        --branch=main
  • Now that you have attached a new source to the Flux source-controller, we just need to create the Kustomization. To do this, run the following:
      flux create kustomization team-alpha-resources \
        --source=GitRepository/team-alpha-resources \
        --path="./" \
        --prune=true \
        --interval=1m \
        --namespace=flux-system
  • You have now successfully deployed a new Kafka cluster with the following components:
      team-alpha                                         connect-0                                                             1/1     Running     0               14m
      team-alpha                                         kafka-0                                                               1/1     Running     0               14m
      team-alpha                                         kafka-1                                                               1/1     Running     0               14m
      team-alpha                                         kafka-2                                                               1/1     Running     0               14m
      team-alpha                                         schemaregistry-0                                                      1/1     Running     0               4m25s
      team-alpha                                         zookeeper-0                                                           1/1     Running     0               14m
      team-alpha                                         zookeeper-1                                                           1/1     Running     0               14m
      team-alpha                                         zookeeper-2                                                           1/1     Running     0               14m
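
You can check the rollout yourself with the commands below. This is a sketch assuming the oc login from step 2 and the CFK CRDs installed in step 3:

```shell
# Watch the team-alpha namespace come up
oc get pods -n team-alpha

# CFK exposes each component as a custom resource; their status should
# eventually report the clusters as running
oc get kafka,zookeeper,connect,schemaregistry -n team-alpha
```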

5. Creating Topics and installing Kafka Connect connectors

  • Teams consuming Kafka on OCP have the ability to create, update and delete topics, connectors and even schemas. We have included some samples in the team-alpha-resources repository.
  • The YAML specification the KafkaTopic resource must follow can be found here. To create a new topic, simply create the YAML file in confluent-openshift-team-alpha-resources/topics and update the kustomization.yaml to include any additions. NOTE: you can add any custom config you want via the configs element.
  • Kafka Connect is slightly more complicated than the KafkaTopic resource. Firstly, note that we already have a Kafka Connect cluster running. The only connector installed is Confluent Replicator; this is purely for the demo, as you will need to install whichever connectors you require. We have included a sample which is disabled (commented out) for you to use as a reference point. Simply uncomment it if you want to deploy this basic example.
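
As an illustration, a minimal KafkaTopic manifest might look like the sketch below. The topic name and config values are assumptions; check the specification linked above for the authoritative schema:

```yaml
# Hypothetical example topic for team-alpha; adjust the name, counts, and
# configs to suit, then reference this file from kustomization.yaml.
apiVersion: platform.confluent.io/v1beta1
kind: KafkaTopic
metadata:
  name: demo-topic
  namespace: team-alpha
spec:
  replicas: 3
  partitionCount: 6
  configs:
    cleanup.policy: "delete"     # any topic-level Kafka config goes here
    retention.ms: "604800000"    # 7 days
```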

6. Perform a Confluent Platform upgrade

  • There are two components you will need to upgrade: the Operator itself together with its CRDs, and the platform components (Kafka / ZooKeeper Docker images).
  • Upgrading Confluent Operator: This is installed by Helm; OCP takes care of this for us by installing the latest 2.2.0 release. If required, you can pin a specific version in dwp-ocp-demo/operators/confluent.yaml.
  • Upgrading Confluent component Docker images: The image tags for each component are specified in the component's custom resource. See the following example of installing 6.2.2:
      image:
        application: confluentinc/cp-server:6.2.2
        init: confluentinc/confluent-init-container:2.2.0
  • Performing any of the above will trigger a rolling update of each StatefulSet, with the Confluent Operator taking care of the ordering and the movement of the underlying persistent volumes.
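
In context, the image tags sit inside the component's custom resource. Below is a sketch of the relevant fragment of a Kafka resource; the metadata and replica values are assumptions for illustration:

```yaml
# Fragment of a CFK Kafka custom resource showing where the upgrade happens;
# bumping these tags triggers the rolling update described above.
apiVersion: platform.confluent.io/v1beta1
kind: Kafka
metadata:
  name: kafka
  namespace: team-alpha
spec:
  replicas: 3
  image:
    application: confluentinc/cp-server:6.2.2
    init: confluentinc/confluent-init-container:2.2.0
```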

Related Projects

Check out these related projects.

Need some help?

File a GitHub issue, send us an email or tweet us.

The legals

Copyright © 2017-2021 OSO | See LICENCE for full details.


Who we are

We at OSO help teams to adopt emerging technologies and solutions to boost their competitiveness, operational excellence and introduce meaningful innovations that drive real business growth. Our developer-first culture, combined with our cross-industry experience and battle-tested delivery methods allow us to implement the most impactful solutions for your business.

Looking for support applying emerging technologies in your business? We’d love to hear from you; get in touch by email.

Start adopting new technologies by checking out our other projects, follow us on Twitter, join our team of leaders and challengers, or contact us to find the right technology to support your business.
