Binding OpenShift applications to OpenShift Streams for Apache Kafka

As a developer of applications and services, you can connect applications deployed on a Kubernetes platform such as Red Hat OpenShift to Kafka instances created in OpenShift Streams for Apache Kafka.

For example, suppose you have the following applications deployed on OpenShift:

  • One application that publishes price updates for a variety of stocks

  • A second application that consumes the price updates for publication on a web page

In addition, you have a Kafka instance in Streams for Apache Kafka. Each time the first application produces a price update, you want to use the Kafka instance to forward the update as an event to the second, consuming application.

To achieve this behavior, you need a way to connect the applications to your Kafka instance in Streams for Apache Kafka.

You can use a specialized Operator called the Service Binding Operator to automatically provide an application on Kubernetes with the parameters required to connect to a Kafka instance in Streams for Apache Kafka. This process is called service binding.

This guide describes how to perform service binding. The Kubernetes platform referred to in the remainder of this guide is Red Hat OpenShift.

About service binding

You can use a specialized Operator called the Service Binding Operator to automatically provide an application on OpenShift with the parameters required to connect to a Kafka instance in OpenShift Streams for Apache Kafka. This process is called service binding. To perform service binding, you must also install the Red Hat OpenShift Application Services (RHOAS) Operator.

The RHOAS Operator exposes a Kafka instance to an OpenShift cluster. The Service Binding Operator collects and shares the information that an application running on the OpenShift cluster needs to connect to the Kafka instance.

When the RHOAS Operator and the Service Binding Operator are installed, you can use the RHOAS command-line interface (CLI) or the OpenShift web console to perform service binding. When the connection between your application and the Kafka instance is established, you can work directly with the Kafka instance using standard OpenShift features and APIs.

When you perform service binding, the Service Binding Operator injects connection parameters for the Kafka instance into the pod for your application, as files. The Service Binding Operator creates the following directory and file structure in the application pod:

Files injected by the Service Binding Operator
/bindings/<kafka-instance-name>
├── bootstrapServers
├── password
├── provider
├── saslMechanism
├── securityProtocol
├── type
└── user

Each file injected by the Service Binding Operator contains a connection parameter specified in plain text. The following list describes the parameters.

bootstrapServers

Bootstrap server endpoint for the Kafka instance.

password

Password for connection to the Kafka instance.

provider

Cloud provider for the Kafka instance.

saslMechanism

Simple Authentication and Security Layer (SASL) mechanism used by the Kafka instance for client authentication.

securityProtocol

Protocol used by the Kafka instance to secure client connections.

type

Metadata that identifies the Red Hat OpenShift Application Services (RHOAS) service. For a Kafka instance in Streams for Apache Kafka, this is set to a value of kafka.

user

User name for connection to the Kafka instance.
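Because each parameter is injected as a plain-text file, an application can assemble its connection settings simply by reading the files under /bindings/<kafka-instance-name>. The following Python sketch illustrates this; the directory and sample values are simulated here (they are placeholders, not values the Operator actually produces):

```python
import tempfile
from pathlib import Path

def read_binding(binding_dir):
    """Read each injected file into a dict keyed by file name."""
    return {
        f.name: f.read_text().strip()
        for f in Path(binding_dir).iterdir()
        if f.is_file()
    }

# Simulate the directory that the Service Binding Operator would create
# (placeholder values, not real credentials or endpoints).
bindings = Path(tempfile.mkdtemp()) / "my-kafka-instance"
bindings.mkdir()
sample = {
    "bootstrapServers": "my-kafka-instance.example.com:443",
    "password": "example-password",
    "provider": "rhoas",
    "saslMechanism": "PLAIN",
    "securityProtocol": "SASL_SSL",
    "type": "kafka",
    "user": "srvc-acct-example",
}
for name, value in sample.items():
    (bindings / name).write_text(value + "\n")

config = read_binding(bindings)
print(config["bootstrapServers"])  # the endpoint a Kafka client would use
```

Client libraries with service-binding support perform this kind of lookup for you, as described in the tutorials later in this guide.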

Installing the Service Binding Operator on OpenShift

Before you can bind Kafka instances in OpenShift Streams for Apache Kafka to applications on OpenShift, you need to install the Service Binding Operator on your OpenShift cluster. The following procedure shows how to use the OperatorHub interface in the OpenShift web console to install the Service Binding Operator.

Prerequisites
  • You can access your OpenShift cluster with the dedicated-admin role (OpenShift Dedicated) or cluster-admin role. Only these roles have privileges to install an Operator on a cluster.

Procedure
  1. Log in to the OpenShift web console with the dedicated-admin role (OpenShift Dedicated) or cluster-admin role.

  2. Click the perspective switcher in the upper-left corner. Switch to the Administrator perspective.

  3. In the left menu, click Operators > OperatorHub.

  4. In the Filter by keyword field, enter Service Binding.

  5. In the filtered results, click Service Binding Operator.

    An information sidebar for the Service Binding Operator opens.

  6. In the sidebar, review the information about the Service Binding Operator and click Install.

  7. On the Install Operator page, perform the following actions:

    1. For the Update channel option, ensure that beta is selected.

    2. For the Installation mode option, ensure that All namespaces on the cluster is selected.

    3. For the Installed Namespace and Update approval options, keep the default values.

    4. Click Install.

  8. When the installation process is finished, click View Operator to see the Operator details.

    The Operator details page for the Service Binding Operator opens in the Installed Operators section of the web console.

    On the Operator details page, the Status field shows a value of Succeeded.

    Also, you can observe that the Service Binding Operator is installed in the openshift-operators namespace.

Installing the RHOAS Operator on OpenShift

Before you can bind Kafka instances in OpenShift Streams for Apache Kafka to applications on OpenShift, you need to install the Red Hat OpenShift Application Services (RHOAS) Operator on your OpenShift cluster. The following procedure shows how to use the OperatorHub interface in the OpenShift web console to install the RHOAS Operator.

Prerequisites
  • You can access your OpenShift cluster with the dedicated-admin role (OpenShift Dedicated) or cluster-admin role. Only these roles have privileges to install an Operator on a cluster.

Procedure
  1. Log in to the OpenShift web console with the dedicated-admin role (OpenShift Dedicated) or cluster-admin role.

  2. Click the perspective switcher in the upper-left corner. Switch to the Administrator perspective.

  3. In the left menu, click Operators > OperatorHub.

  4. In the Filter by keyword field, enter RHOAS.

  5. In the filtered results, select the OpenShift Application Services (RHOAS) Operator.

  6. If you see a dialog box entitled Show community Operator, review the included information. When you have finished, click Continue.

    An information sidebar for the RHOAS Operator opens.

  7. In the sidebar, review the information about the RHOAS Operator and click Install.

  8. On the Install Operator page, perform the following actions:

    1. For the Installation mode option, ensure that All namespaces on the cluster is selected.

    2. For the Update channel, Installed Namespace, and Update approval options, keep the default values.

    3. Click Install.

  9. When the installation process is finished, click View Operator to see the Operator details.

    The Operator details page for the RHOAS Operator opens in the Installed Operators section of the web console.

    On the Operator details page, the Status field shows a value of Succeeded.

    Also, you can observe that the RHOAS Operator is installed in the openshift-operators namespace.

Verifying connection to your OpenShift cluster

After you install the RHOAS Operator, you can verify that the Operator is working by using the RHOAS CLI to connect to your OpenShift cluster and retrieve the cluster status. The following example shows how to verify connection to your OpenShift cluster.

Prerequisites
  • The RHOAS Operator is installed on your OpenShift cluster.

  • You can access your OpenShift cluster with privileges to create a new project.

  • You have installed the OpenShift CLI.

  • You have installed the latest version of the RHOAS CLI (see Installing and configuring the rhoas CLI).

Procedure
  1. On your computer, open a command-line window.

  2. Log in to the OpenShift CLI using a token.

    1. Log in to the OpenShift web console as a user who has privileges to create a new project in the cluster.

    2. In the upper-right corner of the console, click your user name. Select Copy login command.

      A new page opens.

    3. Click the Display Token link.

    4. In the section entitled Log in with this token, copy the full oc login command shown.

    5. On the command line, paste the login command that you copied: right-click on the command line and select Paste.

      You see output confirming that you are logged in to your OpenShift cluster, and the output shows the current project that you are using.

  3. On the command line, use the OpenShift CLI to create a new project, as shown in the following example:

    Example OpenShift CLI command to create new project
    $ oc new-project my-project
  4. Log in to the RHOAS CLI.

    RHOAS CLI login command
    $ rhoas login

    The login command opens a sign-in process in your web browser.

  5. On the command line, use the RHOAS CLI to connect to your OpenShift cluster and retrieve the cluster status.

    RHOAS CLI command to retrieve status of OpenShift cluster
    $ rhoas cluster status
    Namespace: my-project
    RHOAS Operator: Installed

    As shown in the output, the RHOAS CLI indicates that the RHOAS Operator was successfully installed. The CLI also retrieves the name of the current OpenShift project (namespace).

Connecting a Kafka instance to your OpenShift cluster

When you have verified connection to your OpenShift cluster, you can connect a Kafka instance in OpenShift Streams for Apache Kafka to the current project in the cluster. You must establish this connection before you can bind applications running in the project to the Kafka instance. The following example shows how to use the RHOAS CLI to connect a specified Kafka instance to a project in your cluster.

Prerequisites
Procedure
  1. If you are not already logged in to the OpenShift CLI, log in using a token, as described in Verifying connection to your OpenShift cluster.

  2. Log in to the RHOAS CLI.

    RHOAS CLI login command
    $ rhoas login
  3. Use the OpenShift CLI to specify the current OpenShift project. Specify the project that you created when verifying connection to your OpenShift cluster, as shown in the following example:

    Example OpenShift CLI command to specify current project
    $ oc project my-project
  4. Use the RHOAS CLI to connect a Kafka instance in Streams for Apache Kafka to the current project in your OpenShift cluster.

    RHOAS CLI command to connect Kafka instance to OpenShift cluster
    $ rhoas cluster connect

    You are prompted to specify the service that you want to connect to OpenShift.

  5. Use the up and down arrows on your keyboard to highlight the kafka service. Press Enter.

    You are prompted to specify the Kafka instance that you want to connect to OpenShift.

  6. If you have more than one Kafka instance, use the up and down arrows on your keyboard to highlight the instance that you want to connect to OpenShift. Press Enter.

    You see output like the following example:

    Example connection details
    Connection Details:
    
    Service Type: kafka
    Service Name: my-kafka-instance
    Kubernetes Namespace:  my-project
    Service Account Secret: rh-cloud-services-service-account
  7. Verify the connection details shown by the RHOAS CLI. When you are ready to continue, type y and then press Enter.

    You are prompted to provide an access token. The RHOAS Operator requires this token to connect to your Kafka instance.

  8. In your web browser, open the OpenShift Cluster Manager API Token page.

  9. On the OpenShift Cluster Manager API Token page, click Load token. When the page is refreshed, copy the API token shown.

  10. On the command line, right-click and select Paste. Press Enter.

    The RHOAS Operator uses the API token to create a KafkaConnection object on your OpenShift cluster. When this process is complete, you see output like the following example:

    Example output from rhoas cluster connect command
    Service Account Secret "rh-cloud-services-service-account" created successfully
    Client ID: <client_id>
    ...
    KafkaConnection resource "my-kafka-instance" has been created
    Waiting for status from KafkaConnection resource.
    Created KafkaConnection can be injected into your application.
    ...
    KafkaConnection successfully installed on your cluster.

    As shown in the output, the RHOAS Operator creates a new service account to access your Kafka instance in Streams for Apache Kafka. The Operator stores the service account information in a secret.

    The RHOAS Operator also creates a KafkaConnection object for your Kafka instance, which connects the instance to the OpenShift cluster. When you bind your Kafka instance to an application on OpenShift, the Service Binding Operator uses the KafkaConnection object to provide the application with the necessary connection information for the instance. Binding an application to your Kafka instance is described later in this guide.

  11. Set Access Control List (ACL) permissions to enable the new service account created by the RHOAS Operator to access resources in your Kafka instance. To set permissions, use the Client ID value for the service account.

    RHOAS CLI command to set access permissions for service account
    $ rhoas kafka acl grant-access --consumer --producer \
        --service-account <client_id> --topic "*" --group "*"
    
    The following ACL rules are to be created:
    
      PRINCIPAL (7)   PERMISSION         DESCRIPTION
      --------------  ----------------   -------------
      <client_id>     ALLOW | DESCRIBE   TOPIC is "*"
      <client_id>     ALLOW | READ       TOPIC is "*"
      <client_id>     ALLOW | READ       GROUP is "*"
      <client_id>     ALLOW | WRITE      TOPIC is "*"
      <client_id>     ALLOW | CREATE     TOPIC is "*"
      <client_id>     ALLOW | WRITE      TRANSACTIONAL_ID is "*"
      <client_id>     ALLOW | DESCRIBE   TRANSACTIONAL_ID is "*"
    
    ? Are you sure you want to create the listed ACL rules (y/N) Yes
    ✔️ ACLs successfully created in the Kafka instance "my-kafka-instance"

    The command you entered allows applications to create topics in the instance, to produce and consume messages in any topic in the instance, and to use any consumer group.

  12. Use the OpenShift CLI to verify that the RHOAS Operator successfully created the connection.

    OpenShift CLI command to verify Operator connection to cluster
    $ oc get KafkaConnection
    
    NAME                 AGE
    my-kafka-instance    2m35s

    As shown in the output, when you use the rhoas cluster connect command, the RHOAS Operator creates a KafkaConnection object that matches the name of your Kafka instance. In this example, the object name matches a Kafka instance called my-kafka-instance.

Binding a Quarkus application to OpenShift Streams for Apache Kafka using the RHOAS CLI

When the RHOAS Operator is installed on your OpenShift cluster and you have connected a Kafka instance to the cluster, you can use the RHOAS CLI to instruct the Service Binding Operator to automatically inject an application running on the cluster with the parameters required to connect to the Kafka instance. This process is called service binding.

The following tutorial shows how to use the RHOAS CLI to perform service binding. In the tutorial, you create an example Quarkus application and connect this to a Kafka instance. Quarkus is a Kubernetes-native Java framework that is optimized for serverless, cloud, and Kubernetes environments.

When you perform service binding, the Service Binding Operator automatically injects connection parameters as files into the pod for the application. The example Quarkus application in this tutorial uses the quarkus-kubernetes-service-binding extension. This means that the application automatically detects and uses the injected connection parameters.

In general, this automatic injection and detection of connection parameters eliminates the need to manually configure an application to connect to a Kafka instance in OpenShift Streams for Apache Kafka. This is a particular advantage if you have many applications in your project that you want to connect to a Kafka instance.

Prerequisites

  • The Service Binding Operator is installed on your OpenShift cluster.

  • The RHOAS Operator is installed on your OpenShift cluster and you have verified connection to the cluster.

  • You have connected a Kafka instance to a project in your OpenShift cluster.

Deploying an example Quarkus application on OpenShift

In this step of the tutorial, you deploy an example Quarkus application in the OpenShift project that you previously connected your Kafka instance to.

The Quarkus application generates random numbers between 0 and 100 and produces those numbers to a Kafka topic. Another part of the application consumes the numbers from the Kafka topic. Finally, the application uses server-sent events to expose the numbers through a REST endpoint, and a web page in the application displays the exposed numbers.
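The produce/consume flow of the example application can be sketched without a broker by using an in-memory queue to stand in for the prices topic. This is purely illustrative; the real application uses Kafka via the Quarkus messaging extensions:

```python
import queue
import random

prices_topic = queue.Queue()  # stands in for the Kafka "prices" topic

def produce_price():
    """Producer side: publish a random number between 0 and 100."""
    price = random.randint(0, 100)
    prices_topic.put(price)
    return price

def consume_price():
    """Consumer side: take the next price for display on the web page."""
    return prices_topic.get()

sent = produce_price()
received = consume_price()
print(received == sent)  # True: the consumer sees what the producer published
```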

The example Quarkus application uses the quarkus-kubernetes-service-binding extension, which means that the application automatically detects and uses the injected connection parameters. This eliminates the need for manual configuration of the application.

Prerequisites
  • You have privileges to deploy applications in the OpenShift project that you connected your Kafka instance to.

Procedure
  1. If you are not already logged in to the OpenShift CLI, log in using a token, as described in Verifying connection to your OpenShift cluster. Log in as the same user who verified connection to the cluster.

  2. Use the OpenShift CLI to ensure that the current OpenShift project is the one that you previously connected your Kafka instance to, as shown in the following example:

    Example OpenShift CLI command to specify current OpenShift project
    $ oc project my-project
  3. To deploy the Quarkus application, apply an example application template provided by Streams for Apache Kafka.

    OpenShift CLI command to deploy example Quarkus application
    $ oc apply -f https://raw.githubusercontent.com/redhat-developer/app-services-guides/main/code-examples/quarkus-kafka-quickstart/.kubernetes/kubernetes.yml
    
    service/rhoas-quarkus-kafka created
    deployment.apps/rhoas-quarkus-kafka created
    route.route.openshift.io/rhoas-quarkus-kafka created

    As shown in the output, when you deploy the application, OpenShift creates a service and route for access to the application.

  4. Get the URL of the route created for the application.

    OpenShift CLI command to get route for application
    $ oc get route
    
    NAME                   HOST/PORT
    rhoas-quarkus-kafka    rhoas-quarkus-kafka-my-project.apps.sandbox-m2.ll9k.p1.openshiftapps.com
  5. On the command line, highlight the URL shown under HOST/PORT. Right-click and select Copy.

  6. In your web browser, paste the URL for the route. Ensure that the URL includes http://.

    A web page for the Quarkus application opens.

  7. In your web browser, append /prices.html to the URL.

    A new web page entitled Last price opens. Because you haven’t yet connected the Quarkus application to your Kafka instance, the price value appears as N/A.

Creating the prices topic in your Kafka instance

In the previous step of this tutorial, you deployed an example application on OpenShift. The application is a Quarkus application that uses a Kafka topic called prices to produce and consume messages. In this step, you create the prices topic in your Kafka instance.

Prerequisites
Procedure
  1. On the Kafka Instances page of the Streams for Apache Kafka web console, click the name of the Kafka instance that you want to add a topic to.

  2. Select the Topics tab and click Create topic. On the topic creation page shown in the figure, follow the guided steps to define the details of the prices topic. Click Next to complete each step and click Finish to complete the setup.

    Image of wizard to create prices topic
    Figure 1. Guided steps to define topic

The following list describes the topic properties that you must specify.

Topic name

Enter prices as the topic name.

Partitions

Set the number of partitions for this topic. For this tutorial, set a value of 1. Partitions are distinct lists of messages within a topic and enable parts of a topic to be distributed over multiple brokers in the cluster. A topic can contain one or more partitions, enabling producer and consumer loads to be scaled.

Note
You can increase the number of partitions later, but you cannot decrease them.
Message retention

Set the message retention time to the relevant value and increment. For this tutorial, set a value of A week. Message retention time is the amount of time that messages are retained in a topic before they are deleted or compacted, depending on the cleanup policy.

Replicas

For this release of Streams for Apache Kafka, the replicas are preconfigured. The number of partition replicas for the topic is set to 3 and the minimum number of follower replicas that must be in sync with a partition leader is set to 2. Replicas are copies of partitions in a topic. Partition replicas are distributed over multiple brokers in the cluster to ensure topic availability if a broker fails. When a follower replica is in sync with a partition leader, the follower replica can become the new partition leader if needed.

After you complete the topic setup, the new Kafka topic is listed in the topics table.
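In Kafka configuration terms, the settings above translate to concrete values; for example, a retention time of A week corresponds to a retention.ms of 604,800,000 milliseconds. The following Python snippet shows the arithmetic and gathers the settings into a plain dictionary (illustrative only, not an actual API call):

```python
# "A week" of message retention expressed as Kafka's retention.ms value.
ms_per_week = 7 * 24 * 60 * 60 * 1000

topic_config = {
    "name": "prices",
    "num_partitions": 1,      # can be increased later, never decreased
    "replication_factor": 3,  # preconfigured in this release
    "min_insync_replicas": 2, # preconfigured in this release
    "retention_ms": ms_per_week,
}
print(topic_config["retention_ms"])  # 604800000
```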

Binding the Quarkus application to your Kafka instance using the RHOAS CLI

In this step of the tutorial, you use the RHOAS CLI to bind the example Quarkus application that you deployed on OpenShift to your Kafka instance. When you perform this binding, the Service Binding Operator injects connection parameters as files into the pod for the application. The Quarkus application automatically detects and uses the connection parameters to bind to the Kafka instance.

Prerequisites
  • You understand how the Service Binding Operator injects connection parameters as files into the pod for a client application. See About service binding.

  • The Service Binding Operator is installed on your OpenShift cluster.

  • The RHOAS Operator is installed on your OpenShift cluster and you have verified connection to the cluster.

  • You have connected a Kafka instance to a project in your OpenShift cluster.

  • You have deployed the example Quarkus application.

  • You have created the Kafka topic required by the Quarkus application.

Procedure
  1. If you are not already logged in to the OpenShift CLI, log in using a token, as described in Verifying connection to your OpenShift cluster. Log in as the same user who verified connection to the cluster.

  2. Log in to the RHOAS CLI.

    RHOAS CLI login command
    $ rhoas login
  3. Use the OpenShift CLI to ensure that the current OpenShift project is the one that you previously connected your Kafka instance to, as shown in the following example:

    Example OpenShift CLI command to specify current OpenShift project
    $ oc project my-project
  4. Use the RHOAS CLI to instruct the Service Binding Operator to bind your Kafka instance to an application in your OpenShift project.

    RHOAS CLI command to bind Kafka instance to application in OpenShift
    $ rhoas cluster bind

    You are prompted to specify the Kafka instance that you want to bind to an application in your OpenShift project.

  5. If you have more than one Kafka instance, use the up and down arrows on your keyboard to highlight the instance that you want to bind to an application in OpenShift. Press Enter.

    You are prompted to specify the application that you want to bind your Kafka instance to.

  6. If you have more than one application in your OpenShift project, use the up and down arrows on your keyboard to highlight the rhoas-quarkus-kafka example application. Press Enter.

  7. Type y to confirm that you want to continue. Press Enter.

    When binding is complete, you should see output like the following:

    Example output from binding Kafka instance to application in OpenShift
    Using Service Binding Operator to perform binding
    Binding my-kafka-instance with rhoas-quarkus-kafka app succeeded

    The output shows that the RHOAS CLI successfully instructed the Service Binding Operator to bind a Kafka instance called my-kafka-instance to the example Quarkus application called rhoas-quarkus-kafka. The Quarkus application automatically detected the connection parameters injected by the Service Binding Operator and used them to bind with the Kafka instance.

    When service binding is complete, OpenShift redeploys the Quarkus application. When the application is running again, it starts to use the prices Kafka topic that you created in your Kafka instance. One part of the Quarkus application publishes price updates to this topic, while another part of the application consumes the updates.

  8. To verify that the Quarkus application is using the Kafka topic, reopen the Last price web page that you opened earlier in this tutorial.

    On the Last price web page, observe that the price value is continuously updated. The updates show that the Quarkus application is now using the prices topic in your Kafka instance to produce and consume messages that correspond to price updates.

    Note
    You can also use the OpenShift Streams for Apache Kafka web console to browse messages in the Kafka topic. For more information, see Browsing messages in the OpenShift Streams for Apache Kafka web console.

Binding a Node.js application to OpenShift Streams for Apache Kafka using the OpenShift web console

When the RHOAS Operator is installed on your OpenShift cluster and you have connected a Kafka instance to the cluster, you can use the OpenShift web console to instruct the Service Binding Operator to automatically inject an application running on the cluster with the parameters required to connect to the Kafka instance. This process is called service binding.

The following tutorial shows how to use the OpenShift web console to perform service binding. In the tutorial, you create an example Node.js application and connect this to a Kafka instance. Node.js is a server-side JavaScript runtime that’s designed to build scalable network applications. Node.js provides an I/O model based on events and non-blocking operations, which enables efficient applications.

When you perform service binding, the Service Binding Operator automatically injects connection parameters as files into the pod for the application. The example Node.js application in this tutorial uses the kube-service-bindings package. This means that the application automatically detects the injected connection parameters and converts the information into the format used by two popular Node.js clients: KafkaJS and node-rdkafka.
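Conceptually, kube-service-bindings maps the injected files onto the client library's own configuration shape. The following Python sketch illustrates one such mapping, to a KafkaJS-style configuration (the mapping logic and sample values here are assumptions for illustration; the package performs the real conversion for you):

```python
def to_kafkajs_config(binding):
    """Map injected binding parameters onto the shape a KafkaJS client
    expects (brokers, ssl, sasl). Illustrative only."""
    return {
        "brokers": binding["bootstrapServers"].split(","),
        "ssl": binding["securityProtocol"] == "SASL_SSL",
        "sasl": {
            "mechanism": binding["saslMechanism"].lower(),
            "username": binding["user"],
            "password": binding["password"],
        },
    }

# Placeholder binding values, standing in for the injected files.
binding = {
    "bootstrapServers": "broker-one.example.com:443,broker-two.example.com:443",
    "securityProtocol": "SASL_SSL",
    "saslMechanism": "PLAIN",
    "user": "srvc-acct-example",
    "password": "example-password",
}
config = to_kafkajs_config(binding)
print(len(config["brokers"]))  # 2
```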

In general, this automatic injection and detection of connection parameters eliminates the need to manually configure an application to connect to a Kafka instance in OpenShift Streams for Apache Kafka. This is a particular advantage if you have many applications in your project that you want to connect to a Kafka instance.

Prerequisites

  • Your OpenShift cluster is running on OpenShift 4.8 or later.

  • The Service Binding Operator is installed on your OpenShift cluster.

  • The RHOAS Operator is installed on your OpenShift cluster and you have verified connection to the cluster.

  • You have connected a Kafka instance to a project in your OpenShift cluster.

Deploying an example Node.js application on OpenShift

In this step of the tutorial, you deploy an example Node.js application in the OpenShift project that you previously connected your Kafka instance to.

To deploy the example application, you use sample code from the Nodeshift Application Starters reactive example repository in GitHub. In particular, you install the following components of the Node.js application:

  • A producer-backend component that generates random country names and sends these names to a topic in your Kafka instance.

  • A consumer-backend component that consumes the country names from the Kafka topic.

Prerequisites
  • You have privileges to deploy applications in the OpenShift project that you connected your Kafka instance to.

Procedure
  1. Log in to the OpenShift web console with privileges to deploy applications in the project that you previously connected your Kafka instance to.

  2. Click the perspective switcher in the upper-left corner. Switch to the Developer perspective.

    The Topology page opens.

  3. Ensure that the current OpenShift project is the one you previously connected your Kafka instance to.

    1. At the top of the Topology page, click the Project list.

    2. Select the project that you previously connected your Kafka instance to.

  4. If you are not already logged in to the OpenShift CLI, log in using a token, as described in Verifying connection to your OpenShift cluster. Log in as the same user who verified connection to the cluster.

  5. On the command line, clone the Nodeshift Application Starters reactive-example repository from GitHub.

    Git command to clone reactive-example repository
    $ git clone https://github.com/nodeshift-starters/reactive-example.git
  6. Navigate to the reactive-example directory of the repository that you cloned.

    $ cd reactive-example
  7. Navigate to the directory for the consumer component. Use Node Package Manager (npm) to install the dependencies for this component.

    Installation of dependencies for consumer component
    $ cd consumer-backend
    $ npm install
  8. Build the consumer component and deploy it to your OpenShift project.

    Deployment of consumer component to OpenShift
    $ npm run openshift
  9. In the OpenShift web console, ensure that you are on the Topology page.

    You should see an icon for the consumer component that you deployed. The component is a DeploymentConfig object and is labelled DC. After some time, OpenShift completes the deployment.

  10. Click the icon for the consumer component.

    A sidebar opens with the Resources tab displayed. Under Pods, you should see a single pod.

  11. Next to the name of the pod, click View logs.

    In the logs of the pod for the consumer component, you should see errors indicating that the component can’t connect to Kafka. You will establish this connection later in this tutorial.

  12. On the command line, in the repository that you cloned, navigate to the directory for the producer component. Use Node Package Manager to install the dependencies for this component.

    Installation of dependencies for producer component
    $ cd ..
    $ cd producer-backend
    $ npm install
  13. Build the producer component and deploy it to your OpenShift project.

    Deployment of producer component to OpenShift
    $ npm run openshift

    On the Topology page of the OpenShift web console, you should see an icon for the producer component that you deployed. The producer component is also a DeploymentConfig object and labelled DC. After some time, OpenShift completes the deployment.

  14. Open the logs of the pod for the producer component, in the same way that you did for the consumer component.

    In the logs, you should see errors indicating that the producer component can’t connect to Kafka. You will also establish this connection later in this tutorial.

Creating the countries topic in your Kafka instance

In the previous step of this tutorial, you deployed an example application on OpenShift. The application is a Node.js application that uses a Kafka topic called countries to produce and consume messages. In this step, you create the countries topic in your Kafka instance.

Prerequisites
Procedure
  1. On the Kafka Instances page of the Streams for Apache Kafka web console, click the name of the Kafka instance that you want to add a topic to.

  2. Select the Topics tab and click Create topic. On the topic creation page shown in the figure, follow the guided steps to define the details of the countries topic. Click Next to complete each step and click Finish to complete the setup.

    Image of wizard to create countries topic
    Figure 2. Guided steps to define topic

    The following list describes the topic properties that you must specify.

    Topic name

    Enter countries as the topic name.

    Partitions

    Set the number of partitions for this topic. For this tutorial, set a value of 1. Partitions are distinct lists of messages within a topic and enable parts of a topic to be distributed over multiple brokers in the cluster. A topic can contain one or more partitions, enabling producer and consumer loads to be scaled.

    Note
    You can increase the number of partitions later, but you cannot decrease it.

    Message retention

    Set the message retention time to the relevant value and time unit. For this tutorial, set a value of A week. Message retention time is the amount of time that messages are retained in a topic before they are deleted or compacted, depending on the cleanup policy.

    Replicas

    For this release of Streams for Apache Kafka, the replicas are preconfigured. The number of partition replicas for the topic is set to 3 and the minimum number of follower replicas that must be in sync with a partition leader is set to 2. Replicas are copies of partitions in a topic. Partition replicas are distributed over multiple brokers in the cluster to ensure topic availability if a broker fails. When a follower replica is in sync with a partition leader, the follower replica can become the new partition leader if needed.

    After you complete the topic setup, the new Kafka topic is listed in the topics table.
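The wizard settings above correspond to ordinary Kafka topic configuration. As a rough sketch (the object shape below is illustrative only, not a real API call made by the console), the countries topic from this tutorial maps to values like these:

```javascript
// Sketch only: the topic settings chosen in the wizard, expressed as the
// underlying Kafka configuration they correspond to. Values match this
// tutorial; the object itself is illustrative, not an actual API payload.
const WEEK_MS = 7 * 24 * 60 * 60 * 1000; // "A week" as milliseconds

const countriesTopic = {
  name: 'countries',
  partitions: 1,                     // can be increased later, never decreased
  replicationFactor: 3,              // preconfigured in this release
  config: {
    'retention.ms': String(WEEK_MS), // message retention: one week
    'min.insync.replicas': '2',      // preconfigured in this release
  },
};

console.log(countriesTopic.config['retention.ms']); // 604800000
```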

Binding the Node.js application to your Kafka instance using the OpenShift web console

In this step of the tutorial, you use the OpenShift web console to bind the components of the example Node.js application that you deployed on OpenShift to your Kafka instance. When you perform this binding, the Service Binding Operator injects connection parameters as files into the pod for each component.

The example Node.js application uses the kube-service-bindings package. This means that the application automatically detects and uses the injected connection parameters.

Prerequisites
  • You understand how the Service Binding Operator injects connection parameters as files into the pod for a client application. See About service binding.

  • The Service Binding Operator is installed on your OpenShift cluster.

  • The RHOAS Operator is installed on your OpenShift cluster and you have verified connection to the cluster.

  • You have connected a Kafka instance to a project in your OpenShift cluster.

  • You have deployed the example Node.js application.

  • You have created the Kafka topic required by the Node.js application.

Procedure
  1. Ensure that you are logged in to the OpenShift web console as the same user who deployed the Node.js application earlier in this tutorial.

  2. Click the perspective switcher in the upper-left corner. Switch to the Developer perspective.

    The Topology page opens.

  3. Ensure that the current OpenShift project is the one you previously connected your Kafka instance to.

    1. At the top of the Topology page, click the Project list.

    2. Select the project that you previously connected your Kafka instance to.

      On the Topology page for your project, you should see an icon for the KafkaConnection object that was created when you connected a Kafka instance to the project. The icon for the KafkaConnection object is labelled AKC. The name of the object matches the name of the Kafka instance that you connected to the project.

      You should also see icons for the producer and consumer components of the Node.js application that you deployed. Each component is a DeploymentConfig object and is labelled DC.

  4. To start creating a service binding connection, hover the mouse pointer over the icon for the consumer component, as shown in the figure.

    Image of arrow to create a binding connection
    Figure 3. Action to start service binding connection

    An arrow with a dotted line appears from the icon.

  5. Left-click and drag the head of the arrow until it’s directly over the icon for the KafkaConnection object, as shown in the figure.

    Image of service binding tooltip
    Figure 4. Tooltip indicating type of connection to be created

    A tooltip appears over the icon for the KafkaConnection object. The tooltip indicates that you are about to create a service binding connection.

  6. To create the service binding connection, release the left mouse button, as shown in the figure.

    Image of completed binding connection
    Figure 5. Completed service binding connection

    When you create the binding connection, the Service Binding Operator injects connection parameters as files into the pod for the consumer component. The kube-service-bindings package used by the consumer component automatically detects these files and converts the information into the format required by the KafkaJS client that the component uses by default.

  7. To bind the producer component to the KafkaConnection object, drag a connection to the KafkaConnection object, in the same way that you did for the consumer component.

  8. When you have made a connection to the KafkaConnection object, click the icon for the producer component.

    A sidebar opens with the Resources tab displayed. Under Pods, you should still see a single pod corresponding to the component.

  9. Next to the name of the pod, click View logs.

    You should now see that the producer has connected to the Kafka instance. The producer generates random country names and sends these as messages to the countries Kafka topic that you created.

  10. Open the logs for the pod of the consumer component, in the same way that you did for the producer component.

    You should now see that the consumer has connected to the Kafka instance. The consumer displays the same country names that the producer sends to the countries Kafka topic, and in the same order.

    Note
    You can also use the OpenShift Streams for Apache Kafka web console to browse messages in the Kafka topic. For more information, see Browsing messages in the OpenShift Streams for Apache Kafka web console.
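As a final illustration of what the binding provides: once the connection parameters are injected, kube-service-bindings reshapes them into the configuration object that a KafkaJS client expects. A hand-written equivalent might look like the sketch below; the input key names are assumptions about the binding's contents, while the output shape (brokers, ssl, sasl) is standard KafkaJS configuration.

```javascript
// Hypothetical mapping from injected binding values to a KafkaJS-style
// config. Input key names are assumptions; output shape is standard KafkaJS.
function toKafkaJsConfig(binding) {
  return {
    clientId: 'countries-app',               // hypothetical client ID
    brokers: binding.bootstrapServers.split(','),
    ssl: true,
    sasl: {
      mechanism: 'plain',
      username: binding.user,
      password: binding.password,
    },
  };
}

const config = toKafkaJsConfig({
  bootstrapServers: 'broker-0:443,broker-1:443',
  user: 'service-account-id',
  password: 'service-account-secret',
});
console.log(config.brokers.length); // 2
```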