MQ to Kafka Lab

A hands-on lab demonstrating end-to-end integration from a web application, over JMS to IBM MQ, and then to Kafka. The application sends sold-item data from different stores to an MQ queue, which serves as the source for the MQ Kafka connector to write the sold items to the items Kafka topic.

In this readme we present local deployments to your workstation, with Confluent or Strimzi Kafka, and the deployment of Confluent Platform on OpenShift.

For IBM Event Streams and IBM MQ with Cloud Pak for Integration, we have different labs, described in the EDA use cases.

Audience

  • Developers and architects.

What you will learn

  • Lab 1: Run Confluent and IBM MQ locally and test the integration between MQ queues and Kafka topics using the Confluent Kafka MQ connectors.
  • Lab 2: Deploy the connector scenario to an OpenShift cluster with Confluent Platform and IBM MQ already deployed.

Lab 1 - Run locally with docker-compose

Pre-requisites

You will need the following on your workstation:

  • git
  • Docker and docker-compose
  • curl and unzip

Scenario walkthrough

This lab scenario utilizes the officially supported IBM MQ connectors from Confluent, IBM MQ Source Connector and IBM MQ Sink Connector. Both of these connectors require the IBM MQ client jar (com.ibm.mq.allclient.jar) to be downloaded separately and included with any runtime deployments. This is covered below.

  1. Clone this repository:

    git clone https://github.com/ibm-cloud-architecture/eda-lab-mq-to-kafka.git
    cd eda-lab-mq-to-kafka/confluent
  2. Download the confluentinc-kafka-connect-ibmmq-11.0.8.zip file from https://www.confluent.io/hub/confluentinc/kafka-connect-ibmmq and copy the expanded contents (the entire confluentinc-kafka-connect-ibmmq-11.0.8 folder) to ./data/connect-jars:

    # Verify latest version of Confluent MQ Connector
    curl -s -L https://www.confluent.io/hub/confluentinc/kafka-connect-ibmmq | \
      grep --only-matching  "confluent-hub install confluentinc/kafka-connect-ibmmq\:[0-9]*\.[0-9]*\.[0-9]*" | \
      sed "s/confluent-hub install confluentinc\/kafka-connect-ibmmq\://g"
    
    # Latest version at the time of this writing was 11.0.8
    
    # Manually download the file from https://www.confluent.io/hub/confluentinc/kafka-connect-ibmmq
    unzip ~/Downloads/confluentinc-kafka-connect-ibmmq-11.0.8.zip -d ./data/connect-jars/
  3. Download the required IBM MQ client jars:

    curl -s https://repo1.maven.org/maven2/com/ibm/mq/com.ibm.mq.allclient/9.2.2.0/com.ibm.mq.allclient-9.2.2.0.jar -o com.ibm.mq.allclient-9.2.2.0.jar
    cp com.ibm.mq.allclient-9.2.2.0.jar data/connect-jars/confluentinc-kafka-connect-ibmmq-11.0.8/lib/.
  4. Start the containers locally by launching the docker-compose stack:

    docker-compose up -d
  5. Wait for the MQ Queue Manager to successfully start:

    docker logs -f ibmmq
    # Wait for the following lines (timestamps elided):
    #   <timestamp> Started web server
    #   <timestamp> AMQ5041I: The queue manager task 'AUTOCONFIG' has ended. [CommentInsert1(AUTOCONFIG)]
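
    Optionally, a small shell loop can block until the queue manager reports ready, instead of tailing the logs by hand (a sketch matching the startup line above):

    # Poll the container logs until the web server start message appears
    until docker logs ibmmq 2>&1 | grep -q "Started web server"; do
      sleep 5
    done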
  6. Access the Store Simulator web application via http://localhost:8080/#/simulator.

    1. Under the Simulator tab, select IBMMQ as the backend, any number of events you wish to simulate, and click the Simulate button.
  7. Access the IBM MQ Console via https://localhost:9443.

    1. Log in using the default admin credentials of admin/passw0rd, accepting any security warnings for self-signed certificate usage.
    2. Navigate to QM1 management screen via the Manage QM1 tile.
    3. Click on the DEV.QUEUE.1 queue to view the simulated messages from the Store Simulator.
  8. Configure the Kafka Connector instance via the Kafka Connect REST API

    curl -i -X PUT -H  "Content-Type:application/json" \
        http://localhost:8083/connectors/eda-store-source/config \
        -d @kustomize/environment/kconnect/config/mq-confluent-source.json

    You should receive a response similar to the following:

    HTTP/1.1 201 Created
    Date: Tue, 13 Apr 2021 18:16:50 GMT
    Location: http://localhost:8083/connectors/eda-store-source
    Content-Type: application/json
    Content-Length: 634
    Server: Jetty(9.4.24.v20191120)
    
    {"name":"eda-store-source","config":{"connector.class":"io.confluent.connect.ibm.mq.IbmMQSourceConnector","tasks.max":"1","key.converter":"org.apache.kafka.connect.storage.StringConverter","value.converter":"org.apache.kafka.connect.json.JsonConverter","mq.hostname":"ibmmq","mq.port":"1414","mq.transport.type":"client","mq.queue.manager":"QM1","mq.channel":"DEV.APP.SVRCONN","mq.username":"app","mq.password":"adummypasswordusedlocally","jms.destination.name":"DEV.QUEUE.1","jms.destination.type":"QUEUE","kafka.topic":"items","confluent.topic.bootstrap.servers":"broker:29092","name":"eda-store-source"},"tasks":[],"type":"source"}

    For more details on the Kafka Connect REST API, you can visit the Confluent Docs. This step can also be performed via the Confluent Control Center UI.
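
    Once the connector is registered, the standard Kafka Connect REST API status endpoint reports its health (a quick check; jq is only used for pretty-printing):

    # Both the connector and its task should report state RUNNING
    curl -s http://localhost:8083/connectors/eda-store-source/status | jq .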

  9. Access Confluent Control Center via http://localhost:9021. (NOTE: This component sleeps for two minutes upon initial startup.)

    1. Click on your active cluster
    2. Click on Connect in the left-nav menu, then connect in the Connect Cluster list.
    3. You should see your Running eda-store-source connector.
    4. Click on Topics in the left-nav menu and select items in the Topics list.
    5. Click on the Messages tab and enter 0 in the Offset textbox.
    6. You should see all the messages that were previously in your DEV.QUEUE.1 queue now in your items topic and they are no longer in the MQ queue!
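
    Alternatively, a console consumer can confirm the records from the command line (a sketch; it assumes the broker container is named broker, per the broker:29092 bootstrap address in the connector configuration above, and that kafka-console-consumer is on its PATH):

    # Read the first five records from the items topic
    docker exec broker kafka-console-consumer --bootstrap-server broker:29092 \
      --topic items --from-beginning --max-messages 5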
  10. To stop the environment once you are complete:

    docker-compose down

Lab 2 - Run on OpenShift Container Platform

Lab contents:

  1. Pre-requisites
  2. Scenario walkthrough
  3. Deploy MQ queue manager with remote access enabled
  4. Deploy Store Simulator application
  5. Create custom Kafka Connect container images
  6. Update Confluent Platform container deployments
  7. Configure MQ Connector
  8. Verify end-to-end connectivity
  9. Lab complete!

Pre-requisites

You need the following:

  • git
  • jq
  • OpenShift oc CLI
  • openssl & keytool - installed as part of most Linux/macOS operating systems and the Java runtime, respectively.
  • Confluent Platform (Kafka cluster) deployed on Red Hat OpenShift via Confluent Operator
  • IBM MQ Operator on Red Hat OpenShift

Scenario walkthrough

  1. Clone this repository. All subsequent commands are run from the root directory of the cloned repository.

    git clone https://github.com/ibm-cloud-architecture/eda-lab-mq-to-kafka.git
    cd eda-lab-mq-to-kafka
  2. The lab can be run with the three logical components below spread across any number of OpenShift projects. Update the environment variables below with your respective project for each component; the instructions that follow will then always reference the correct project for each command. All three values can be identical if all components are installed into a single project.

    export PROJECT_CONFLUENT_PLATFORM=my-confluent-platform-project
    export PROJECT_MQ=my-ibm-mq-project
    export PROJECT_STORE_SIMULATOR=my-store-simulator-project

    NOTE: If any of the above projects do not yet exist in your OpenShift cluster, create them via the oc new-project PROJECT_NAME command, for example with the loop sketched below.
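
    A sketch of creating any projects that are missing (note that oc new-project also switches your active project):

    for p in ${PROJECT_CONFLUENT_PLATFORM} ${PROJECT_MQ} ${PROJECT_STORE_SIMULATOR}; do
      # Ignore the error if the project already exists
      oc new-project "$p" 2>/dev/null || true
    done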

Deploy MQ queue manager with remote access enabled

IBM MQ queue managers that are exposed by a Route on OpenShift require TLS-enabled security, so we will first create an SSL certificate pair and a truststore, for use by the queue manager and its clients respectively.

  1. Create TLS certificate and key for use by the MQ QueueManager custom resource:

    openssl req -newkey rsa:2048 -nodes  -subj "/CN=localhost" -x509 -days 3650 \
                -keyout  ./kustomize/environment/mq/base/certificates/tls.key   \
                -out ./kustomize/environment/mq/base/certificates/tls.crt
  2. Create TLS client truststore for use by the Store Simulator and Kafka Connect applications:

    keytool -import -keystore ./kustomize/environment/mq/base/certificates/mq-tls.jks \
            -file ./kustomize/environment/mq/base/certificates/tls.crt                \
            -storepass my-mq-password -noprompt -keyalg RSA -storetype JKS
  3. Create the OpenShift resources by applying the Kustomize YAMLs:

    oc project ${PROJECT_MQ}
    oc apply -k ./kustomize/environment/mq -n ${PROJECT_MQ}

    REFERENCE MATERIAL: Create a TLS-secured queue manager via Example: Configuring TLS
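
    As a sanity check, you can list the contents of the truststore created in step 2 and confirm the certificate was imported (using the same storepass as above):

    keytool -list -keystore ./kustomize/environment/mq/base/certificates/mq-tls.jks \
            -storepass my-mq-password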

Deploy Store Simulator application

  1. Update the store-simulator ConfigMap YAML to point to the specific MQ queue manager's Route:

    export MQ_ROUTE_HOST=$(oc get route store-simulator-mq-ibm-mq-qm -o jsonpath="{.spec.host}" -n ${PROJECT_MQ})
    # envsubst writes to a temp file first, to avoid truncating the file it is still reading
    envsubst < ./kustomize/apps/store-simulator/base/configmap.yaml > /tmp/configmap.yaml && \
           mv /tmp/configmap.yaml ./kustomize/apps/store-simulator/base/configmap.yaml
    envsubst < ./kustomize/apps/store-simulator/base/configmap-mq-ccdt.yaml > /tmp/configmap-mq-ccdt.yaml && \
           mv /tmp/configmap-mq-ccdt.yaml ./kustomize/apps/store-simulator/base/configmap-mq-ccdt.yaml
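
    Optionally, confirm the Route hostname was substituted into the ConfigMaps:

    grep "${MQ_ROUTE_HOST}" ./kustomize/apps/store-simulator/base/configmap.yaml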
  2. The Store Simulator application acts as an MQ client and requires the necessary truststore information for secure connectivity. Copy the truststore secret that was generated as part of the MQ component deployment to the Store Simulator project for re-use by the application:

    oc get secret -n ${PROJECT_MQ} -o json store-simulator-mq-truststore | \
       jq -r ".metadata.namespace=\"${PROJECT_STORE_SIMULATOR}\"" | \
       oc apply -n ${PROJECT_STORE_SIMULATOR} -f -

    NOTE: This step is only required if you are running MQ in a different project than the Store Simulator application.

  3. Apply Kustomize YAMLs:

    oc project ${PROJECT_STORE_SIMULATOR}
    oc apply -k ./kustomize/apps/store-simulator -n ${PROJECT_STORE_SIMULATOR}
  4. Send messages to MQ via the store simulator application:

    1. The store simulator user interface is exposed as a Route on OpenShift:
      oc get route store-simulator -o jsonpath="{.spec.host}" -n ${PROJECT_STORE_SIMULATOR}
    2. Access this Route via HTTP in your browser.
    3. Go to the SIMULATOR tab.
    4. Select the IBMMQ radio button and use the slider to select the number of messages to send.
    5. Click the Simulate button and wait for the Messages Sent window to be populated.
  5. Validate messages received in MQ Web Console:

    1. The MQ Web Console is exposed as a Route on OpenShift:
      oc get route store-simulator-mq-ibm-mq-web -o jsonpath="{.spec.host}" -n ${PROJECT_MQ}
    2. Go to this route via HTTPS in your browser and login.
    3. If you need to determine your Default authentication admin password, it can be retrieved via the following command:
      oc get secret -n {CP4I installation project} ibm-iam-bindinfo-platform-auth-idp-credentials -o json | jq -r .data.admin_password | base64 -d -
      
    4. Click the QM1 tile.
    5. Click the DEV.QUEUE.1 queue.
    6. Verify that the queue depth is equal to the number of messages sent from the store application.

Create custom Kafka Connect container images

  1. Apply the Kafka Connect components from the Kustomize YAMLs:

    oc project ${PROJECT_CONFLUENT_PLATFORM}
    oc apply -k ./kustomize/environment/kconnect/ -n ${PROJECT_CONFLUENT_PLATFORM}
    oc logs -f buildconfig/confluent-connect-mq -n ${PROJECT_CONFLUENT_PLATFORM}

    This creates two ImageStreamTags that are based on the official Confluent Platform container images, which can now be referenced locally in the cluster by the Connect Cluster pods. We then create a BuildConfig to create a custom build that provides a container image with the required Confluent Platform MQ Connector binaries pre-installed, which in turn creates an additional ImageStreamTag that allows us to update the Connect Cluster pods to use the new images.
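
    Once the build completes, you can confirm the ImageStreams exist (their names match the cleanup list at the end of this lab):

    oc get imagestream cp-server-connect-operator cp-init-container-operator \
       -n ${PROJECT_CONFLUENT_PLATFORM}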

  2. The Kafka Connect instance acts as an MQ client and requires the necessary truststore information for secure connectivity. Copy the truststore secret that was generated as part of the MQ component deployment to the local Confluent project for re-use by the Connector:

    oc get secret -n ${PROJECT_MQ} -o json store-simulator-mq-truststore | \
       jq -r ".metadata.namespace=\"${PROJECT_CONFLUENT_PLATFORM}\"" | \
       oc apply -n ${PROJECT_CONFLUENT_PLATFORM} -f -

    NOTE: This step is only required if you are running MQ in a different project than the Confluent Platform.

  3. Next, we need to patch the ConfigMap the Connect pods use, to inject a JVM configuration parameter (jvm.config) into the Connect runtime. We will do this by patching the PhysicalStatefulCluster that manages the Connect cluster. This is required because the Confluent-provided Connect images use a non-IBM JVM, and the IBM Cipher Suite Mappings that the MQ client uses by default are incompatible with it. Adding the -Dcom.ibm.mq.cfg.useIBMCipherMappings=false JVM configuration parameter tells the MQ client to use the Oracle-compatible Cipher Suite Mappings instead.

    oc get psc/connectors -o yaml -n ${PROJECT_CONFLUENT_PLATFORM} | \
       sed 's/   -Dcom.sun.management.jmxremote.ssl=false/   -Dcom.sun.management.jmxremote.ssl=false\n          -Dcom.ibm.mq.cfg.useIBMCipherMappings=false/' | \
       oc replace -n ${PROJECT_CONFLUENT_PLATFORM} -f -

    REFERENCE: If you encounter CipherSuite issues in the Connector logs, reference TLS CipherSpecs and CipherSuites in IBM MQ classes for JMS from the IBM MQ documentation.
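
    You can verify the patch took effect by checking for the new flag in the PhysicalStatefulCluster definition:

    oc get psc/connectors -o yaml -n ${PROJECT_CONFLUENT_PLATFORM} | grep useIBMCipherMappings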

Update Confluent Platform container deployments

This lab assumes that Confluent Platform is deployed via https://github.ibm.com/ben-cornwell/confluent-operator, which utilizes the Confluent Operator Quick Start and deploys the Schema Registry, Replicator, Connect, and Control Center components in a single Helm release. This is problematic when following Step 5 of the Deploy Confluent Connectors instructions, as image registries cannot be mixed between different components in the same release: Connect requires the internal OpenShift registry for the custom images we just built, while the other components still require the original docker.io registry.

To circumvent this issue, we can manually patch the Kafka Connect PhysicalStatefulCluster custom resource so the Confluent Operator propagates the changes down to the pod level and takes advantage of the newly built custom Connect images (as well as the TLS truststore files).

oc patch psc/connectors --type merge --patch "$(cat ./kustomize/environment/kconnect/infra/confluent-connectors-psc-patch.yaml | envsubst)" -n ${PROJECT_CONFLUENT_PLATFORM}

However, if Confluent Platform was deployed via the instructions available at Install Confluent Operator and Confluent Platform, and Connect is available as its own Helm release (i.e. helm get notes connectors succeeds), you can follow Step 5 of the Deploy Confluent Connectors instructions to update the Confluent custom resources via Helm. If this path is taken, you may need to reapply the useIBMCipherMappings patch from the previous section.

A helm upgrade command may look something like the following:

helm upgrade --install connectors \
      --values /your/original/values/file/values-file.yaml \
      --namespace ${PROJECT_CONFLUENT_PLATFORM} \
      --set "connect.enabled=true" \
      --set "connect.mountedSecrets[0].secretRef=store-simulator-mq-truststore" \
      --set "global.provider.registry.fqdn=image-registry.openshift-image-registry.svc:5000" \
      --set "connect.image.repository=${PROJECT_CONFLUENT_PLATFORM}/cp-server-connect-operator" \
      --set "connect.image.tag=6.1.1.0-custom-mq" \
      --set "global.initContainer.image.repository=${PROJECT_CONFLUENT_PLATFORM}/cp-init-container-operator" \
      --set "global.initContainer.image.tag=6.1.1.0" \
      ./confluent-operator-1.7.0/helm/confluent-operator
  1. Log in to Confluent Control Center and navigate to Home > controlcenter.cluster > Connect > connect-default > Add connector and verify that the IbmMqSinkConnector and IbmMQSourceConnector are now available as connector options.

  2. Optionally, you can run the following curl command to verify via the REST API:

    curl --insecure --silent https://$(oc get route connectors-bootstrap -o jsonpath="{.spec.host}" -n ${PROJECT_CONFLUENT_PLATFORM})/connector-plugins | jq .

Configure MQ Connector

  1. Create the target Kafka topic in Confluent Platform:

    1. In the Confluent Control Center, navigate to Home > controlcenter.cluster > Topics and click Add a topic.
    2. Enter items.openshift (or your own custom topic name).
    3. Click Create with defaults.
  2. Generate a customized MQ connector configuration file based on your local environment:

    export KAFKA_BOOTSTRAP=$(oc get route kafka-bootstrap -o jsonpath="{.spec.host}" -n ${PROJECT_CONFLUENT_PLATFORM}):443
    
    # Generate the configured Kafka Connect connector configuration file
    cat ./kustomize/environment/kconnect/config/mq-confluent-source-openshift.json | envsubst > ./kustomize/environment/kconnect/config/mq-confluent-source-openshift-configured.json

    NOTE: You will need to manually edit the generated ./kustomize/environment/kconnect/config/mq-confluent-source-openshift-configured.json file if you used a topic name other than items.openshift, for example with the sed one-liner sketched below.
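
    A sketch of that edit (the replacement topic name my.custom.topic is only an example; on macOS, use sed -i '' instead of sed -i):

    sed -i 's/items.openshift/my.custom.topic/' \
        ./kustomize/environment/kconnect/config/mq-confluent-source-openshift-configured.json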

  3. Deploy an MQ Connector instance by choosing one of the two paths:

    1. You can deploy a connector instance via the Confluent Control Center UI:

      1. Log in to the Confluent Control Center and navigate to your Connect cluster via Home > controlcenter.cluster > Connect > connect-default.
      2. Click Upload connector config file and browse to eda-lab-mq-to-kafka/kustomize/environment/kconnect/config/mq-confluent-source-openshift-configured.json
      3. Click Continue.
      4. Click Launch.
    2. You can deploy a connector instance via the Kafka Connect REST API:

      export CONNECTORS_BOOTSTRAP=$(oc get route connectors-bootstrap -o jsonpath="{.spec.host}" -n ${PROJECT_CONFLUENT_PLATFORM})
      curl -i -X PUT -H  "Content-Type:application/json" --insecure \
          https://$CONNECTORS_BOOTSTRAP/connectors/eda-store-source/config \
          -d @kustomize/environment/kconnect/config/mq-confluent-source-openshift-configured.json

      REFERENCE: If you encounter CipherSuite issues in the Connector logs, reference TLS CipherSpecs and CipherSuites in IBM MQ classes for JMS from the IBM MQ documentation.
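
      As in Lab 1, the Kafka Connect REST API status endpoint reports the connector's health once it is registered:

      # Both the connector and its task should report state RUNNING
      curl --insecure --silent \
          https://$CONNECTORS_BOOTSTRAP/connectors/eda-store-source/status | jq .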

Verify end-to-end connectivity

  1. Validate records received in Kafka topic in Confluent Platform:

    1. Log in to the Confluent Control Center and navigate to your topic via Home > controlcenter.cluster > Topics > items.openshift.
    2. Click Messages.
    3. Enter 0 in the offset textbox and hit Enter.
    4. You should see all the messages you sent to the MQ queue now residing in the Kafka topic.
  2. Validate MQ queues have been drained via the MQ Web Console:

    1. The MQ Web Console is exposed as a route on OpenShift:
      oc get route store-simulator-mq-ibm-mq-web -o jsonpath="{.spec.host}" -n ${PROJECT_MQ}
    2. Go to this route via HTTPS in your browser and login.
    3. If you need to determine your Default authentication admin password, it can be retrieved via the following command:
      oc get secret -n {CP4I installation project} ibm-iam-bindinfo-platform-auth-idp-credentials -o json | jq -r .data.admin_password | base64 -d -
    4. Click the QM1 tile.
    5. Click the DEV.QUEUE.1 queue.
    6. Verify that the queue depth is zero messages.

Lab complete!

To clean up the resources deployed via the lab scenario:

  1. Resources in the ${PROJECT_STORE_SIMULATOR} project can be removed via:
    oc delete -k ./kustomize/apps/store-simulator/ -n ${PROJECT_STORE_SIMULATOR}
  2. Resources in the ${PROJECT_MQ} project can be removed via:
    oc delete -k ./kustomize/environment/mq/ -n ${PROJECT_MQ}
  3. Resources in the ${PROJECT_CONFLUENT_PLATFORM} project can be removed as well, but this also requires resetting the Connectors Helm release to its original container image settings. The resources to delete are:
    • buildconfig/confluent-connect-mq
    • imagestream.image.openshift.io/cp-init-container-operator
    • imagestream.image.openshift.io/cp-server-connect-operator
    • secret/store-simulator-mq-truststore
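
    A sketch of deleting those resources by name:

    oc delete buildconfig/confluent-connect-mq \
       imagestream/cp-init-container-operator \
       imagestream/cp-server-connect-operator \
       secret/store-simulator-mq-truststore \
       -n ${PROJECT_CONFLUENT_PLATFORM}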
