The purpose of this project is to facilitate Kafka functionality for Capabilities. Whether it is the creation of topics or service accounts, Kafka-janitor takes care of those actions on behalf of services like Capability Service.
Confluent Cloud is a bit of a mess in terms of accessing its functionality: some functionality is only available through their proprietary binary tool ccloud (which doesn't support outputting in a reasonable data schema), some through their web interface (which uses an unsupported HTTP API with JSON as its data schema), and the confluent-kafka-dotnet SDK only partially supports the rest.
Kafka janitor knows what relations we desire between Kafka topics, access control lists, service accounts and key secrets. It also understands the domain language used by the rest of our self-service platform.
Tika wraps the ccloud command line interface and exposes a REST HTTP interface that the Kafka janitor uses to instruct Confluent Cloud about what artifacts to create or delete.
ccloud CLI is a command-line tool that interacts with Confluent Cloud. It can do more with Confluent Cloud than the current SDK.
Confluent Cloud is a managed Apache Kafka cluster.
+-----------------+
| Kafka janitor |
+-----------------+
|
v
+-----------------+
| Tika |
+-----------------+
|
v
+-----------------+
| ccloud CLI |
+-----------------+
|
v
+-----------------+
| Confluent Cloud |
+-----------------+
The Kafka janitor has the following dependencies:
- Tika a RESTful HTTP API on top of the ccloud CLI (Repository)
A Tika server Docker image can be built by running the command docker build -t tika . in the tika/server folder.
Kafka janitor uses AWS Parameter Store to save the key secrets generated for the users. This dependency can be replaced by an in-memory vault by setting the following environment variable: KAFKAJANITOR_VAULT="INMEMORY"
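As a quick sketch, switching to the in-memory vault for a local shell session looks like this (the variable name comes from the line above; the export syntax assumes a Unix shell):

```shell
# Replace the AWS Parameter Store dependency with the in-memory vault
# for local development; KAFKAJANITOR_VAULT is read by Kafka janitor.
export KAFKAJANITOR_VAULT="INMEMORY"

# Confirm the setting before starting the service.
echo "$KAFKAJANITOR_VAULT"
```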
To get a local copy up and running follow these simple steps.
Clone the repository
git clone git@github.com:dfds/kafka-janitor.git
Restore dependencies
cd kafka-janitor/src
dotnet restore .
- .NET Core 3.1 SDK a set of libraries and tools that allow developers to create .NET Core applications and libraries.
- curl a command line tool for transferring data with URLs.
- Unix shell a command-line interpreter that provides a command line user interface for Unix-like operating systems.
You can run the project with a hot-reloading file watcher by completing the following steps:
Start a local instance of Tika in "not connected to ccloud" mode:
cd tika/server/
npm install
cd ../local-development
./run-not-connected.sh
Start Kafka janitor
cd kafka-janitor/local-development/
./watch-run.sh
You should now be able to make a GET request against the service's health endpoint and get a Healthy response.
curl --request GET \
--url http://localhost:5000/Healthz
- Docker a tool designed to make it easier to create, deploy, and run applications by using containers.
- a local Kubernetes cluster; this could run in Minikube, Microk8s, Kind or K3S
- kubectl a command line tool for controlling Kubernetes clusters.
- Kustomize a standalone tool to customize Kubernetes objects through a kustomization file.
- curl a command line tool for transferring data with URLs.
- Unix shell a command-line interpreter that provides a command line user interface for Unix-like operating systems.
Build the Docker images for kafka-janitor and tika:
cd kafka-janitor/
docker build -t ded/kafka-janitor .
cd tika/server/
docker build -t ded/tika .
Make sure your images are in your local cluster's registry; otherwise you will get a "repository does not exist or may require 'docker login'" error when starting the pods in Kubernetes. Each cluster implementation has its own take on registries:
In Minikube you need to run the command eval $(minikube docker-env) before building your image.
In Microk8s you need to push the image to the Microk8s registry
In Kind you need to create a local registry and connect it to kind
In K3S you need to create a local registry and connect it to K3s
Point your kubectl to your local cluster via the command: kubectl config use-context [your-local-cluster-context]
Deploy the services to your cluster via kubectl:
cd kafka-janitor/local-development/
./deploy-to-local-cluster.sh
You should now be able to access the Kafka janitor running in Kubernetes by port forwarding into it and checking its health:
kubectl -n selfservice port-forward service/kafka-janitor 5000:80
curl --request GET --url http://localhost:5000/Healthz
- Visual Studio Code an extendable code editor with support for debugging, version control and much more.
- humao.rest-client an extension that allows you to send HTTP requests from Visual Studio Code.
We have provided some sample interactions for the REST API endpoints in the folder kafka-janitor/local-development/. The interaction files end with .rest
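For reference, a minimal interaction in the humao.rest-client format might look like this, using the health endpoint shown earlier (the actual sample files in local-development/ may differ):

```http
GET http://localhost:5000/Healthz HTTP/1.1
```

Opening a .rest file in Visual Studio Code with the extension installed shows a "Send Request" action above each request.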
Contributions are what make the open source community such an amazing place to learn, inspire, and create. Any contributions you make are greatly appreciated.
- Fork the Project
- Create your Feature Branch (git checkout -b feature/AmazingFeature)
- Commit your Changes (git commit -m 'Add some AmazingFeature')
- Push to the Branch (git push origin feature/AmazingFeature)
- Open a Pull Request
Distributed under the MIT License.
If the scoped service account is missing for deployment, see https://wiki.dfds.cloud/en/teams/devex/selfservice/Kubernetes-selfservice-deployment-setup