[bitnami/kafka] Can't configure internal & external access #25580
Hi @Skull0ne, let me better understand your issue. The Kafka chart provides two different ways to configure external access.

First of all, I would like to highlight one important detail: when using external access, each Kafka node needs to be individually addressable. So when external access is configured, it cannot be `domain:9094` for all nodes. It must be either a different port for each node (`domain:9094` / `domain:9095` / `domain:9096`) or an individual domain for each node (`node-1.domain:9094` / `node-2.domain:9094` / `node-3.domain:9094`).

This is how NodePort external access with `autoDiscovery` works: a NodePort service is created for each Kafka node, and during pod initialization each pod discovers the node port assigned to its service and configures its advertised listeners accordingly. You can find this and more information in the Kafka README: https://github.com/bitnami/charts/blob/main/bitnami/kafka/README.md#accessing-kafka-brokers-from-outside-the-cluster

For example:

```yaml
# values.yaml
rbac:
  create: true
serviceAccount:
  create: true
controller:
  automountServiceAccountToken: true
externalAccess:
  enabled: true
  autoDiscovery:
    enabled: true
  controller:
    service:
      type: NodePort
      domain: "my-domain.com"
      nodePorts:
        - 30000
        - 30001
        - 30002
```

Advertised listeners on each node:
```console
kafka-controller-2   1/1   Running   0   3m22s
$ kubectl exec -it kafka-controller-0 cat /opt/bitnami/kafka/config/server.properties | grep advertised
advertised.listeners=CLIENT://kafka-controller-0.kafka-controller-headless.default.svc.cluster.local:9092,INTERNAL://kafka-controller-0.kafka-controller-headless.default.svc.cluster.local:9094,EXTERNAL://my-domain.com:30000
$ kubectl exec -it kafka-controller-1 cat /opt/bitnami/kafka/config/server.properties | grep advertised
advertised.listeners=CLIENT://kafka-controller-1.kafka-controller-headless.default.svc.cluster.local:9092,INTERNAL://kafka-controller-1.kafka-controller-headless.default.svc.cluster.local:9094,EXTERNAL://my-domain.com:30001
$ kubectl exec -it kafka-controller-2 cat /opt/bitnami/kafka/config/server.properties | grep advertised
advertised.listeners=CLIENT://kafka-controller-2.kafka-controller-headless.default.svc.cluster.local:9092,INTERNAL://kafka-controller-2.kafka-controller-headless.default.svc.cluster.local:9094,EXTERNAL://my-domain.com:30002
```

As you can see, each node was configured to advertise its own external port.
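The per-broker mapping above can be sketched in a few lines of shell; the domain and ports are the example values from the values.yaml above, not chart defaults:

```shell
# Pair each broker ordinal with its configured nodePort, mirroring the
# advertised EXTERNAL listeners shown above (example values only).
DOMAIN="my-domain.com"
NODE_PORTS="30000 30001 30002"
i=0
for port in $NODE_PORTS; do
  echo "kafka-controller-$i -> EXTERNAL://$DOMAIN:$port"
  i=$((i + 1))
done
```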
Hi @migruiz4 and thanks for your answer.
I wasn't aware of this.
Due to my LB setup, the best option would be 3 domains, one per broker.
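That three-domain option could look something like the sketch below, assuming the chart version in use supports `externalAccess.controller.service.loadBalancerNames` (this is an unverified assumption; check the chart's values reference before relying on it):

```yaml
# Sketch only: advertise one hostname per broker instead of one domain with
# per-broker ports. loadBalancerNames support is assumed, not verified.
externalAccess:
  enabled: true
  controller:
    service:
      type: LoadBalancer
      loadBalancerNames:
        - node-1.my-domain.com
        - node-2.my-domain.com
        - node-3.my-domain.com
      ports:
        external: 9094
```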
Does it make sense to you? My issue with the NodePort setup is that I can't know which host will run a Kafka broker, so I have to add every node with every port to the LB pool, which generates a lot of noise since most of them don't expose the ports.

Regards
Name and Version
registry-1.docker.io/bitnamicharts/kafka 28.2.1
What architecture are you using?
amd64
What steps will reproduce the bug?
Hello,

We want to deploy Kafka in a Kubernetes cluster. Apps running on the same cluster will connect to it, but we also want to access it from another location.

I have a set of load balancers in front of my Kubernetes cluster, and I'm exposing port 9094 on them. My pool on the LB targets all my nodes on ports 30000, 30001, and 30002.

If I set `externalAccess.controller.service.domain` to `MY_DOMAIN:9094`, a test with kafkacat shows that the brokers advertise the URLs `MY_DOMAIN:9094:30000`, `MY_DOMAIN:9094:30001`, and `MY_DOMAIN:9094:30002` (30000-30002 are the ports exposed on the nodes on the private network).

If I override `listeners.advertisedListeners` to `CLIENT://advertised-address-placeholder:9092,INTERNAL://advertised-address-placeholder:9094,EXTERNAL://MY_DOMAIN.com:9094`, it works from outside, but overriding this value removes the sed command from the init script that replaces `advertised-address-placeholder` with `POD_NAME.SERVICE`, so internal communication doesn't work because `advertised-address-placeholder` is not a valid host.

I tried to set it to `CLIENT://kafka.kafka:9092,INTERNAL://kafka-controller-headless.kafka:9094,EXTERNAL://MY_DOMAIN:9094` but I got this error:

If I don't override `listeners.advertisedListeners`, the `EXTERNAL` listener is not added to the `advertised.listeners` parameter in the server.properties configmap. Without it and `externalAccess.controller.service.domain`, when I reach Kafka I get an unknown host error, as it returns a cluster.svc.local URL.

Are you using any custom parameters or values?
What is the expected behavior?
The replace function should stay, as each pod needs to set its own pod name in its config file.

If `Values.listeners.advertisedListeners` is overridden and `advertised-address-placeholder` is not present, nothing will happen, so it should be safe to leave it in.
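A minimal sketch (an approximation of the init script's substitution step, not the actual Bitnami script) showing both points: the placeholder is substituted when present, and the same sed is a harmless no-op when the listeners have been overridden without the placeholder:

```shell
# Approximation of the init script's substitution step (assumed behavior,
# not the actual Bitnami script).
conf="$(mktemp)"

# Case 1: placeholder present -> replaced with this pod's headless-service name.
MY_POD_NAME="kafka-controller-0"
HEADLESS="kafka-controller-headless.kafka.svc.cluster.local"
echo "advertised.listeners=CLIENT://advertised-address-placeholder:9092" > "$conf"
sed -i "s|advertised-address-placeholder|${MY_POD_NAME}.${HEADLESS}|g" "$conf"
cat "$conf"

# Case 2: listeners overridden, no placeholder -> sed changes nothing.
echo "advertised.listeners=CLIENT://kafka.kafka:9092,EXTERNAL://MY_DOMAIN:9094" > "$conf"
before="$(cat "$conf")"
sed -i "s|advertised-address-placeholder|${MY_POD_NAME}.${HEADLESS}|g" "$conf"
[ "$before" = "$(cat "$conf")" ] && echo "no-op: safe to keep the sed"
```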