Binding OpenShift applications to OpenShift Streams for Apache Kafka and OpenShift Service Registry

As a developer of applications and services, you can connect applications on a Kubernetes platform such as Red Hat OpenShift to cloud services such as OpenShift Streams for Apache Kafka and OpenShift Service Registry.

For example, suppose you have the following applications deployed on OpenShift:

  • One application that publishes price updates for a variety of stocks

  • A second application that consumes the price updates for publication on a web page

In addition, suppose you have a Kafka instance in Streams for Apache Kafka. Each time the first application produces a price update, you want to use the Kafka instance to forward the update as an event to the second, consuming application. To achieve this behavior, you need a way to connect the applications on OpenShift to your Kafka instance in Streams for Apache Kafka.

In cases such as this, you can use a specialized Operator called the Service Binding Operator. The Service Binding Operator automatically provides an application on Kubernetes with the parameters required to connect to a Kafka instance in Streams for Apache Kafka. This process is called service binding.

You can also use the Service Binding Operator to connect the application to a Service Registry instance. Service Registry instances store key and value schemas, as well as API definitions. When you use a schema with your Kafka instance, the schema ensures that messages conform to a specified format.

This guide describes how to perform service binding for Streams for Apache Kafka and Service Registry. The Kubernetes platform referred to in the remainder of this guide is Red Hat OpenShift.

About service binding

You can use a specialized Operator called the Service Binding Operator to automatically provide an application on OpenShift with the parameters required to connect to a specified Kafka instance in Streams for Apache Kafka. If you’re using a schema with your Kafka instance, you can also use the Service Binding Operator to provide the application with connection parameters for a Service Registry instance. Using the Service Binding Operator to automatically generate connection parameters for these cloud services is called service binding.

To perform service binding, you must also install the Red Hat OpenShift Application Services (RHOAS) Operator. The RHOAS Operator exposes a Kafka or Service Registry instance to an OpenShift cluster. The Service Binding Operator then collects and shares the information required for an application running on the OpenShift cluster to connect to the Kafka or Service Registry instance.

When the RHOAS Operator and Service Binding Operator are installed, you can use the RHOAS CLI or the OpenShift web console to perform service binding. When the connection between your application and the Kafka or Service Registry instance is established, you can work directly with the instance using standard OpenShift features and APIs.

As part of the service binding process, the Service Binding Operator injects connection parameters for the Kafka or Service Registry instance into the pod for your application, as files.

When you bind to a Kafka instance in Streams for Apache Kafka, the Service Binding Operator creates the following directory and file structure in the application pod:

Files injected by the Service Binding Operator for a Kafka instance
/bindings/<kafka-instance-name>
├── bootstrapServers
├── password
├── provider
├── saslMechanism
├── securityProtocol
├── type
└── user

Each file that the Service Binding Operator injects into the application pod contains a single connection parameter, specified in plain text. The connection parameters that correspond to the injected files are described below.

bootstrapServers

Bootstrap server endpoint for the Kafka instance.

password

Password for connection to the Kafka instance.

provider

Cloud provider for the Kafka instance.

saslMechanism

Simple Authentication and Security Layer (SASL) mechanism used by the Kafka instance for client authentication.

securityProtocol

Protocol used by the Kafka instance to secure client connections.

type

Metadata that identifies the Red Hat OpenShift Application Services (RHOAS) service. For a Kafka instance in Streams for Apache Kafka, this is set to a value of kafka.

user

User name for connection to the Kafka instance.
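
After you complete the binding steps described later in this guide, you can inspect these files directly in the application pod. The following commands are an illustrative sketch that assumes an application deployment named my-app and a Kafka instance named my-kafka-instance; substitute your own names.

Inspecting the injected Kafka binding files (illustrative example)
$ oc exec deployment/my-app -- ls /bindings/my-kafka-instance
bootstrapServers
password
provider
saslMechanism
securityProtocol
type
user
$ oc exec deployment/my-app -- cat /bindings/my-kafka-instance/bootstrapServers
<bootstrap_server_host>:<port>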

When you bind to a Service Registry instance, the Service Binding Operator creates the following directory and file structure in the application pod:

Files injected by the Service Binding Operator for a Service Registry instance
/bindings/<registry-instance-name>
├── clientId
├── clientSecret
├── oauthRealm
├── oauthTokenUrl
├── registry
└── type

Each file that the Service Binding Operator injects into the application pod contains a single connection parameter, specified in plain text. The connection parameters that correspond to the injected files are described below. Some of these parameters are required by the OAuth (Open Authorization) protocol, which is used to secure connections to Service Registry instances.

clientId

Client ID used by OAuth to connect to the Service Registry instance.

clientSecret

Name of the secret that contains the password used by OAuth to connect to the Service Registry instance.

oauthRealm

Authentication realm used by OAuth to connect to the Service Registry instance.

oauthTokenUrl

Endpoint for the access token used by OAuth to connect to the Service Registry instance.

registry

Endpoint for the Service Registry instance.

type

Metadata that identifies the Red Hat OpenShift Application Services (RHOAS) service. For a Service Registry instance, this is set to a value of service-registry.
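
The Quarkus application used later in this guide detects these files automatically through its service binding extensions. An application without such an extension can read the files itself. The following shell sketch is illustrative only; it assumes a Service Registry instance named my-registry-instance and simply loads each parameter into an environment variable for the application to use.

Illustrative example: reading the injected Service Registry parameters in a container entrypoint
BINDINGS_DIR=/bindings/my-registry-instance
export REGISTRY_URL="$(cat "$BINDINGS_DIR/registry")"
export OAUTH_TOKEN_URL="$(cat "$BINDINGS_DIR/oauthTokenUrl")"
export OAUTH_CLIENT_ID="$(cat "$BINDINGS_DIR/clientId")"
export OAUTH_CLIENT_SECRET="$(cat "$BINDINGS_DIR/clientSecret")"
export OAUTH_REALM="$(cat "$BINDINGS_DIR/oauthRealm")"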

Installing the Service Binding Operator on OpenShift

Before you can bind Kafka or Service Registry instances to applications on OpenShift, you need to install the Service Binding Operator on your OpenShift cluster. The following procedure shows how to use the OperatorHub interface in the OpenShift web console to install the Service Binding Operator.

Prerequisites
  • You can access your OpenShift cluster with the dedicated-admin role (OpenShift Dedicated) or cluster-admin role. Only these roles have privileges to install an Operator on a cluster.

Procedure
  1. Log in to the OpenShift web console with the dedicated-admin role (OpenShift Dedicated) or cluster-admin role.

  2. Click the perspective switcher in the upper-left corner. Switch to the Administrator perspective.

  3. In the left menu, click Operators > OperatorHub.

  4. In the Filter by keyword field, enter Service Binding.

  5. In the filtered results, click Service Binding Operator.

    An information sidebar for the Service Binding Operator opens.

  6. In the sidebar, review the information about the Service Binding Operator and click Install.

  7. On the Install Operator page, perform the following actions:

    1. For the Update channel option, ensure that stable is selected.

    2. For the Installation mode option, ensure that All namespaces on the cluster is selected.

    3. For the Installed Namespace and Update approval options, keep the default values.

    4. Click Install.

  8. When the installation process is finished, click View Operator to see the Operator details.

    The Operator details page for the Service Binding Operator opens in the Installed Operators section of the web console.

    On the Operator details page, the Status field shows a value of Succeeded.

    Also, you can observe that the Service Binding Operator is installed in the openshift-operators namespace.

Installing the RHOAS Operator on OpenShift

Before you can bind Kafka or Service Registry instances to applications on OpenShift, you need to install the Red Hat OpenShift Application Services (RHOAS) Operator on your OpenShift cluster. The following procedure shows how to use the OperatorHub interface in the OpenShift web console to install the RHOAS Operator.

Prerequisites
  • You can access your OpenShift cluster with the dedicated-admin role (OpenShift Dedicated) or cluster-admin role. Only these roles have privileges to install an Operator on a cluster.

Procedure
  1. Log in to the OpenShift web console with the dedicated-admin role (OpenShift Dedicated) or cluster-admin role.

  2. Click the perspective switcher in the upper-left corner. Switch to the Administrator perspective.

  3. In the left menu, click Operators > OperatorHub.

  4. In the Filter by keyword field, enter RHOAS.

  5. In the filtered results, select the OpenShift Application Services (RHOAS) Operator.

  6. If you see a dialog box entitled Show community Operator, review the included information. When you’ve finished, click Continue.

    An information sidebar for the RHOAS Operator opens.

  7. In the sidebar, review the information about the RHOAS Operator and click Install.

  8. On the Install Operator page, perform the following actions:

    1. For the Installation mode option, ensure that All namespaces on the cluster is selected.

    2. For the Update channel, Installed Namespace, and Update approval options, keep the default values.

    3. Click Install.

  9. When the installation process is finished, click View Operator to see the Operator details.

    The Operator details page for the RHOAS Operator opens in the Installed Operators section of the web console.

    On the Operator details page, the Status field shows a value of Succeeded.

    Also, you can observe that the RHOAS Operator is installed in the openshift-operators namespace.
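
If you prefer the command line, you can confirm that both Operators are installed by listing the ClusterServiceVersion (CSV) resources in the openshift-operators namespace. The output below is abbreviated and illustrative; the exact resource names and versions depend on your cluster.

Using the OpenShift CLI to list the installed Operators
$ oc get csv -n openshift-operators

NAME                                 DISPLAY                                  PHASE
rhoas-operator.<version>             OpenShift Application Services (RHOAS)   Succeeded
service-binding-operator.<version>   Service Binding Operator                 Succeeded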

Verifying connection to your OpenShift cluster

After you install the RHOAS Operator, you can verify that the Operator is working by using the RHOAS CLI to connect to your OpenShift cluster and retrieve the cluster status. The following example shows how to verify connection to your OpenShift cluster.

Prerequisites
Procedure
  1. On your computer, open a command-line window.

  2. Log in to the OpenShift CLI using a token.

    1. Log in to the OpenShift web console as a user who has privileges to create a new project in the cluster.

    2. In the upper-right corner of the console, next to your user name, click the drop-down menu. Select Copy login command.

      A new page opens.

    3. Click the Display Token link.

    4. In the section entitled Log in with this token, copy the full oc login command shown.

    5. On the command line, paste the login command that you copied: right-click the command line and select Paste.

      You see output confirming that you're logged in to your OpenShift cluster and showing the current project that you're using.

  3. On the command line, create a new project, as shown in the following example.

    Creating a new OpenShift project
    $ oc new-project my-project
  4. Log in to the RHOAS CLI.

    Logging in to the RHOAS CLI
    $ rhoas login

    The login command opens a sign-in process in your web browser.

  5. On the command line, use the RHOAS CLI to connect to your OpenShift cluster and retrieve the cluster status.

    Using the RHOAS CLI to retrieve the status of your OpenShift cluster
    $ rhoas cluster status
    RHOAS Operator: Installed
    Service Binding Operator: Installed

    As shown in the output, the RHOAS CLI indicates that the RHOAS Operator and Service Binding Operator were successfully installed.

Connecting a Kafka and Service Registry instance to your OpenShift cluster

When you’ve verified connection to your OpenShift cluster, you can connect Kafka and Service Registry instances to the current project in the cluster. You must establish these connections before you can bind applications running in the project to the Kafka and Service Registry instances.

The following procedure shows how to use the RHOAS CLI to connect a specified Kafka or Service Registry instance to a project in your cluster.

Important

Before you can bind an application running on OpenShift to your Kafka and Service Registry instances, you must connect each of these cloud services to your OpenShift cluster. Therefore, you must perform the following procedure for both services.

Prerequisites
Procedure
  1. If you’re not already logged in to the OpenShift CLI, log in using a token, as described in Verifying connection to your OpenShift cluster.

  2. Log in to the RHOAS CLI.

    Logging in to the RHOAS CLI
    $ rhoas login
  3. Use the OpenShift CLI to specify the current OpenShift project. Specify the project that you created when verifying connection to your OpenShift cluster, as shown in the following example.

    Using the OpenShift CLI to specify the current OpenShift project
    $ oc project my-project
  4. Use the RHOAS CLI to connect a Kafka or Service Registry instance to the current project in your OpenShift cluster.

    Using the RHOAS CLI to connect a Kafka or Service Registry instance to your OpenShift cluster
    $ rhoas cluster connect

    You’re prompted to specify the cloud service that you want to connect to OpenShift.

  5. Use the up and down arrows on your keyboard to highlight kafka or service-registry. Press Enter.

    Based on the service you select, you’re prompted to specify the Kafka or Service Registry instance that you want to connect to OpenShift.

  6. If you have more than one Kafka or Service Registry instance, use the up and down arrows on your keyboard to highlight the instance that you want to connect to OpenShift. Press Enter.

    The RHOAS CLI shows details for the connection that you’ll create. The following example shows connection details for a Kafka instance.

    Example connection details
    Connection Details:
    
    Service Type: kafka
    Service Name: my-kafka-instance
    Kubernetes Namespace:  my-project
    Service Account Secret: rh-cloud-services-service-account
  7. Verify the connection details shown by the RHOAS CLI. When you’re ready to continue, type y and then press Enter.

    You’re prompted to provide an access token. The RHOAS Operator requires this token to connect to your Kafka or Service Registry instance.

  8. In your web browser, open the OpenShift Cluster Manager API Token page.

  9. On the OpenShift Cluster Manager API Token page, click Load token. When the page is refreshed, copy the API token shown.

  10. On the command line, right-click and select Paste. Press Enter.

    Based on the cloud service that you previously selected, the RHOAS Operator uses the API token to create a KafkaConnection or ServiceRegistryConnection object on your OpenShift cluster.

    The following example shows output for a Kafka instance.

    Example output from rhoas cluster connect command
    Service Account Secret "rh-cloud-services-service-account" created successfully
    Client ID: <client_id>
    ...
    KafkaConnection resource "my-kafka-instance" has been created
    Waiting for status from KafkaConnection resource.
    Created KafkaConnection can be injected into your application.
    ...
    KafkaConnection successfully installed on your cluster.

    As shown in the preceding example, the RHOAS Operator creates a new service account to access the Kafka or Service Registry instance that you specified. The Operator stores the service account information in a secret.

    The RHOAS Operator also creates a KafkaConnection or ServiceRegistryConnection object for your Kafka or Service Registry instance, which connects the instance to the OpenShift cluster. When you bind your Kafka or Service Registry instance to an application on OpenShift, the Service Binding Operator uses the KafkaConnection or ServiceRegistryConnection object to provide the application with the necessary connection information for the instance. Binding an application to your Kafka or Service Registry instance is described later in this guide.

  11. Enable the new service account created by the RHOAS Operator to access the Kafka or Service Registry instance that you specified.

    1. If you connected to a Kafka instance, set Access Control List (ACL) permissions to enable the new service account to access resources in the Kafka instance.

      Setting Kafka access permissions for the service account
      $ rhoas kafka acl grant-access --consumer --producer --service-account <client_id> --topic "*" --group "*"

      You should see output like the following example:

      Example output when setting Kafka access permissions
      The following ACL rules are to be created:
      
        PRINCIPAL (7)  PERMISSION         DESCRIPTION
        -------------- ----------------   -------------
        <client_id>    ALLOW | DESCRIBE   TOPIC is "*"
        <client_id>    ALLOW | READ       TOPIC is "*"
        <client_id>    ALLOW | READ       GROUP is "*"
        <client_id>    ALLOW | WRITE      TOPIC is "*"
        <client_id>    ALLOW | CREATE     TOPIC is "*"
        <client_id>    ALLOW | WRITE      TRANSACTIONAL_ID is "*"
        <client_id>    ALLOW | DESCRIBE   TRANSACTIONAL_ID is "*"
      
      ? Are you sure you want to create the listed ACL rules (y/N) Yes
      ✔️ ACLs successfully created in the Kafka instance "my-kafka-instance"

      In this example, the permissions you create allow applications to use the service account to create topics in the Kafka instance, to produce and consume messages in any topic in the instance, and to use any consumer group.

    2. If you connected to a Service Registry instance, use Role-Based Access Control (RBAC) to enable the new service account to access the Service Registry instance and the artifacts (such as schemas) that it contains.

      Setting Service Registry access permissions for the service account
      $ rhoas service-registry role add --role=manager --service-account <client_id>
      Updating role for principal
      Role was successfully applied

      In this example, the manager role that you assign to the service account allows applications to use the service account to view and write to schemas in the Service Registry instance.

  12. Use the OpenShift CLI to verify that the RHOAS Operator successfully created the KafkaConnection or ServiceRegistryConnection object, as shown in the following example:

    Using the OpenShift CLI to verify Operator connection to your cluster
    $ oc get KafkaConnection
    
    NAME                 AGE
    my-kafka-instance    2m35s

    As indicated by this output, when you use the rhoas cluster connect command, the RHOAS Operator creates a KafkaConnection or ServiceRegistryConnection object that matches the name of your Kafka or Service Registry instance. In the preceding example, the object name matches a Kafka instance called my-kafka-instance.

  13. Repeat the preceding steps to ensure that both your Streams for Apache Kafka and Service Registry instances are connected to your OpenShift cluster.
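
If you also connected a Service Registry instance, you can verify its connection object in the same way. The instance name in the following example is illustrative.

Using the OpenShift CLI to verify the ServiceRegistryConnection object
$ oc get ServiceRegistryConnection

NAME                   AGE
my-registry-instance   1m47s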

Binding a Quarkus application to OpenShift Streams for Apache Kafka and OpenShift Service Registry using the RHOAS CLI

When the RHOAS Operator and Service Binding Operator are installed on your OpenShift cluster and you’ve connected a Kafka and Service Registry instance to the cluster, you’re ready to deploy an application and perform service binding. Service binding means instructing the Service Binding Operator to automatically inject the application with the parameters required to connect to the Kafka and Service Registry instances.

The following tutorial shows how to use the RHOAS CLI to perform service binding. In the tutorial, you create an example Quarkus application and connect this to a Kafka and Service Registry instance. Quarkus is a Kubernetes-native Java framework that is optimized for serverless, cloud, and Kubernetes environments. The Quarkus application in the tutorial uses an Apache Avro schema to serialize and deserialize messages. The application automatically publishes the Avro schema that it uses to your Service Registry instance.

When you perform service binding, the Service Binding Operator automatically injects connection parameters as files into the pod for the application. The Quarkus application uses service binding extensions for Kafka, Service Registry, and the Service Binding Operator. These extensions enable the application to automatically detect and use the injected connection parameters, eliminating the need for manual configuration of the application.

In general, this automatic injection and detection of connection parameters eliminates the need to manually configure an application to connect to a Kafka or Service Registry instance. This is a particular advantage if you have many applications in your project that you want to connect to these cloud services.

Prerequisites

Deploying an example Quarkus application on OpenShift

In this step of the tutorial, you deploy an example Quarkus application in the OpenShift project that you previously connected your Kafka and Service Registry instances to.

The Quarkus application generates random movie names and produces those names to a Kafka topic. Another part of the application consumes the names from the Kafka topic. Finally, the application uses server-sent events to expose the names on a REST endpoint. A web page in the application displays the exposed names.

The example Quarkus application uses service binding extensions for Kafka, Service Registry, and the Service Binding Operator. These extensions enable the application to automatically detect and use the injected connection parameters, eliminating the need for manual configuration of the application.

Prerequisites
  • You have privileges to deploy applications in the OpenShift project that you connected your Kafka and Service Registry instances to.

Procedure
  1. If you’re not already logged in to the OpenShift CLI, log in using a token, as described in Verifying connection to your OpenShift cluster. Log in as the same user who verified connection to the cluster.

  2. Use the OpenShift CLI to ensure that the current OpenShift project is the one that you previously connected your Kafka and Service Registry instances to, as shown in the following example.

    Using the OpenShift CLI to specify the current OpenShift project
    $ oc project my-project
  3. To create the Quarkus application, deploy a container image provided by Red Hat OpenShift Application Services.

    Deploying an example Quarkus application
    $ oc new-app quay.io/rhoas/kafka-avro-schema-quickstart
    
    imagestream.image.openshift.io "kafka-avro-schema-quickstart" created
    deploymentconfig.apps.openshift.io "kafka-avro-schema-quickstart" created
    service "kafka-avro-schema-quickstart" created

    As shown in the output, when you deploy the application, OpenShift creates a service for the application. However, the service is not exposed by default. You must expose the service to create a route for clients to access the application.

  4. Expose the previously created service to create a route to the application.

    Creating a route to the Quarkus application
    $ oc expose svc/kafka-avro-schema-quickstart
    
    route.route.openshift.io/kafka-avro-schema-quickstart exposed
  5. Get the URL of the route created for the application. An example is shown below.

    Getting the route details for the Quarkus application
    $ oc get route
    
    NAME                            HOST/PORT
    kafka-avro-schema-quickstart    kafka-avro-schema-quickstart-my-project.apps.sandbox-m2.ll9k.p1.openshiftapps.com
  6. On the command line, highlight the URL shown under HOST/PORT. Right-click and select Copy.

  7. In your web browser, paste the URL for the route. Ensure that the URL includes http://.

    A web page for the Quarkus application opens.

  8. In your web browser, append /movies.html to the URL.

    A new web page entitled Last movie opens. Because you haven't yet connected the Quarkus application to your Kafka instance, the movie name appears as N/A.
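
If you prefer to check the deployment from the command line, you can retrieve the route host with a JSONPath query and request the page with curl. This optional check is a sketch; a 200 response indicates that the application is serving the page.

Optional: checking the application route from the command line
$ APP_HOST=$(oc get route kafka-avro-schema-quickstart -o jsonpath='{.spec.host}')
$ curl -s -o /dev/null -w "%{http_code}\n" "http://$APP_HOST/movies.html"
200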

Creating the movies topic in your Kafka instance

In the previous step of this tutorial, you deployed an example application on OpenShift. The application is a Quarkus application that uses a Kafka topic called movies to produce and consume messages. In this step, you create the movies topic in your Kafka instance.

Prerequisites
Procedure
  1. On the Kafka Instances page of the Streams for Apache Kafka web console, click the name of the Kafka instance that you want to add a topic to.

  2. Select the Topics tab, click Create topic, and follow the guided steps to define the details of the movies topic. Click Next to complete each step and click Finish to complete the setup.

    Figure 1. Guided steps to define topic
    Topic name

    Enter movies as the topic name.

    Partitions

    Set the number of partitions for this topic. For this tutorial, set a value of 1. Partitions are distinct lists of messages within a topic and enable parts of a topic to be distributed over multiple brokers in the cluster. A topic can contain one or more partitions, enabling producer and consumer loads to be scaled.

    Note
    You can increase the number of partitions later, but you cannot decrease them.
    Message retention

    Set the message retention time to the relevant value and increment. For this tutorial, set a value of A week. Message retention time is the amount of time that messages are retained in a topic before they are deleted or compacted, depending on the cleanup policy.

    Replicas

    For this release of Streams for Apache Kafka, the replicas are preconfigured. The number of partition replicas for the topic is set to 3 and the minimum number of follower replicas that must be in sync with a partition leader is set to 2. Replicas are copies of partitions in a topic. Partition replicas are distributed over multiple brokers in the cluster to ensure topic availability if a broker fails. When a follower replica is in sync with a partition leader, the follower replica can become the new partition leader if needed.

    After you complete the topic setup, the new Kafka topic is listed in the topics table.
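
Alternatively, you can create the topic from the command line with the RHOAS CLI. The following command is a minimal sketch; flag names can vary between CLI versions, and any settings that you don't specify use their defaults.

Creating the movies topic with the RHOAS CLI (alternative)
$ rhoas kafka topic create --name movies --partitions 1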

Binding the Quarkus application to your Kafka and Service Registry instances using the RHOAS CLI

In this step of the tutorial, you use the RHOAS CLI to bind the example Quarkus application that you deployed on OpenShift to your Kafka and Service Registry instances. When you perform this binding, the Service Binding Operator injects connection parameters as files into the pod for the application. The Quarkus application automatically detects and uses the connection parameters to bind to the Kafka and Service Registry instances.

Prerequisites
Procedure
  1. If you’re not already logged in to the OpenShift CLI, log in using a token, as described in Verifying connection to your OpenShift cluster. Log in as the same user who verified connection to the cluster.

  2. Log in to the RHOAS CLI.

    Logging in to the RHOAS CLI
    $ rhoas login
  3. Use the OpenShift CLI to ensure that the current OpenShift project is the one that you previously connected your Kafka and Service Registry instances to, as shown in the following example.

    Using the OpenShift CLI to specify the current OpenShift project
    $ oc project my-project
  4. Use the RHOAS CLI to instruct the Service Binding Operator to bind a Kafka or Service Registry instance to an application in your OpenShift project.

    Using the RHOAS CLI to bind a cloud services instance to an application on OpenShift
    $ rhoas cluster bind

    You’re prompted to specify the cloud service that you want to bind to your OpenShift application.

    Important
    Steps 5-8 that follow show how to bind a Kafka instance to the Quarkus application. Later in the procedure, you’re instructed to repeat the steps for your Service Registry instance.
  5. Use the up and down arrows on your keyboard to highlight kafka. Press Enter.

    You’re prompted to specify the Kafka instance that you want to bind to an application in your OpenShift project.

  6. If you have more than one Kafka instance, use the up and down arrows on your keyboard to highlight the instance that you want to bind to an application in OpenShift. Press Enter.

    You’re prompted to specify the application that you want to bind your Kafka instance to.

  7. If you have more than one application in your OpenShift project, use the up and down arrows on your keyboard to highlight the kafka-avro-schema-quickstart example application. Press Enter.

  8. Type y to confirm that you want to continue. Press Enter.

    When binding is complete, you should see output like the following:

    Example output from binding a Kafka instance to an application in OpenShift
    Using Service Binding Operator to perform binding
    Binding my-kafka-instance with kafka-avro-schema-quickstart app succeeded

    The output shows that the RHOAS CLI successfully instructed the Service Binding Operator to bind a Kafka instance called my-kafka-instance to the example Quarkus application called kafka-avro-schema-quickstart. The Quarkus application automatically detected the connection parameters injected by the Service Binding Operator and used them to bind with the Kafka instance.

  9. Repeat steps 5-8 of this procedure to bind your Service Registry instance to the Quarkus application. This time, when you’re prompted to specify the cloud service that you want to connect to OpenShift, use the up and down arrows on your keyboard to highlight service-registry.

    When service binding is complete, OpenShift redeploys the Quarkus application. When the application is running again, it starts to use the movies topic that you created in your Kafka instance. One part of the Quarkus application publishes movie name updates to this topic, while another part of the application consumes the updates.

  10. To verify that the Quarkus application is using the Kafka topic, reopen the Last movie web page that you opened earlier in this tutorial.

    On the Last movie web page, observe that the movie name is continuously updated. The updates show that the Quarkus application is now using the movies topic in your Kafka instance to produce and consume messages.
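
As a final optional check, you can confirm that the Service Binding Operator injected a binding directory for each instance into the application pod. You can also list the ServiceBinding resources that the rhoas cluster bind command created by running oc get ServiceBinding. The directory names shown below are illustrative; they match the names of your Kafka and Service Registry instances.

Optional: listing the injected binding directories
$ oc rsh dc/kafka-avro-schema-quickstart ls /bindings
my-kafka-instance
my-registry-instance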