kube-dns - cannot use short dns entry #28210

Closed

ajtrichards opened this issue Jun 29, 2016 · 1 comment

Comments

ajtrichards commented Jun 29, 2016

We've come across an issue with two services running: auth-api and rabbitmq-master. From the auth-api pod we are trying to resolve the rabbitmq-master service so that it can read from the queues.

When we use the short DNS name rabbitmq-master, we get the following error:

kubectl exec auth-api-jf2ec -i nslookup rabbitmq-master
nslookup: can't resolve '(null)': Name does not resolve

nslookup: can't resolve 'rabbitmq-master': Try again
error: error executing remote command: Error executing command in container: Error executing in Docker Container: 1

If we use the full DNS name, rabbitmq-master.default.svc.cluster.local, it works OK:

kubectl exec auth-api-jf2ec -i nslookup rabbitmq-master.default.svc.cluster.local

Name:      rabbitmq-master.default.svc.cluster.local
Address 1: 10.0.61.158 ip-10-0-61-158.eu-west-1.compute.internal
nslookup: can't resolve '(null)': Name does not resolve

So we could just use the full DNS name, but that would mean changing our deployment scripts for each customer namespace we want to use.
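As a side note, once short-name resolution works, pods in the same namespace as the service only need the bare service name, and the intermediate form <service>.<namespace> resolves via the svc.cluster.local search entry, so the full cluster suffix never has to be hard-coded. Illustrative only, using the same exec style as above:

kubectl exec auth-api-jf2ec -i nslookup rabbitmq-master.default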

I've checked our cluster and the kube-dns pod is up and running.

$ k get --all-namespaces pods
NAMESPACE     NAME                                                              READY     STATUS    RESTARTS   AGE
default       auth-api-jf2ec                                                    1/1       Running   0          15h
default       rabbitmq-master-6yu3o                                             1/1       Running   0          15h
kube-system   elasticsearch-logging-v1-o24ye                                    1/1       Running   0          6d
kube-system   elasticsearch-logging-v1-vlvw0                                    1/1       Running   1          6d
kube-system   fluentd-elasticsearch-ip-172-0-0-32.eu-west-1.compute.internal   1/1       Running   1          6d
kube-system   fluentd-elasticsearch-ip-172-0-0-33.eu-west-1.compute.internal   1/1       Running   0          6d
kube-system   fluentd-elasticsearch-ip-172-0-0-34.eu-west-1.compute.internal   1/1       Running   0          6d
kube-system   heapster-v1.0.2-2148290995-zl3wq                                  4/4       Running   0          6d
kube-system   kibana-logging-v1-e3ci3                                           1/1       Running   3          6d
kube-system   kube-dns-v11-ju72c                                                4/4       Running   0          6d
kube-system   kube-proxy-ip-172-0-0-32.eu-west-1.compute.internal              1/1       Running   1          6d
kube-system   kube-proxy-ip-172-0-0-33.eu-west-1.compute.internal              1/1       Running   0          6d
kube-system   kube-proxy-ip-172-0-0-34.eu-west-1.compute.internal              1/1       Running   0          6d
kube-system   kubernetes-dashboard-v1.0.1-tbyn2                                 1/1       Running   1          6d
kube-system   monitoring-influxdb-grafana-v3-gm426                              2/2       Running   0          6d
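For completeness, the kube-dns Service and its endpoints can be checked the same way (standard checks, assuming the stock kube-dns service name; output omitted here):

$ kubectl get svc kube-dns --namespace=kube-system
$ kubectl get endpoints kube-dns --namespace=kube-system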

This is the output of the /etc/resolv.conf file on the auth-api pod:

$ kubectl exec auth-api-jf2ec -i cat /etc/resolv.conf
search default.svc.cluster.local svc.cluster.local cluster.local eu-west-1.compute.internal
nameserver 10.0.0.10
options ndots:5
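Given that search list and ndots:5, the single-label name rabbitmq-master has fewer than five dots, so a conforming resolver should append the search suffixes before trying the name as-is, roughly in this order (illustrative, not captured output):

rabbitmq-master.default.svc.cluster.local
rabbitmq-master.svc.cluster.local
rabbitmq-master.cluster.local
rabbitmq-master.eu-west-1.compute.internal
rabbitmq-master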

Have I configured something incorrectly, or is there something I haven't configured at all?

ajtrichards (Author) commented

There was one key piece of information that I left out of this issue... we were using alpine:3.3 as our base image, and its resolver doesn't support the search directive in /etc/resolv.conf.

After upgrading to alpine:3.4, the issue was resolved.
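A quick way to confirm whether a base image's resolver honours the search path is to run nslookup from a throwaway pod on that image, e.g. (a sketch; kubectl run flags vary between versions):

$ kubectl run dns-test -i --tty --image=alpine:3.4 --restart=Never -- nslookup rabbitmq-master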

Hopefully this will be of use to someone.
