[ERROR] Unable to wait for coredns pod + no debug logs? #144

Open
chrissound opened this issue Nov 27, 2019 · 10 comments
Labels
bug Something isn't working

Comments

@chrissound

[demo@nixos:~/kubernix]$ KUBERNIX_LOG_LEVEL=debug sudo target/release/kubernix --log-level debug --nodes=2
[⠉  1m] ███████████████████████░░ 26/28 Deploying CoreDNS and waiting to be ready
[   0s] █████████████████████████  9/9 Cleanup done
[ERROR] Unable to wait for coredns pod

Hello, I just thought I'd give this a try, but ran into the errors above.

Where are the logs meant to show up?

@issue-label-bot

Issue-Label Bot is automatically applying the label bug to this issue, with a confidence of 0.80.

issue-label-bot added the bug (Something isn't working) label on Nov 27, 2019
@saschagrunert
Owner

This seems related to #140. Can you ensure that there is no firewall running on your system? An iptables -F before starting the cluster could also help.
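
For reference, a quick check along those lines might look like the following (a rough sketch assuming an iptables-based setup; the firewall.service unit name matches NixOS, other distributions differ):

# Is a host firewall active? (unit names vary per distribution)
systemctl status firewall.service

# Look at the current filter rules before changing anything
sudo iptables -L -n -v

# Flush all filter-table rules, then start the cluster
sudo iptables -F
sudo target/release/kubernix --nodes=2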

@saschagrunert
Owner

Sorry, the close was unintentional.

@siers

siers commented Nov 27, 2019

iptables -F fixed it for me. I think I only have firewall.allowedTCPPorts = [ 22 80 8080 22000 65353 ]; in my NixOS config, plus whatever rules an earlier run of the non-containerized version of this project may have created.
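
In case it helps with debugging: the rules that the NixOS firewall (or an earlier run) installed can be dumped and saved before flushing them, for example (a sketch; the nixos-fw chain name is what NixOS usually creates, adjust as needed):

# Dump the current rules so they can be reviewed or restored later
sudo iptables-save > /tmp/iptables-backup.rules

# The NixOS firewall installs its own chains (usually named nixos-fw*);
# grep for them to see what is being filtered
sudo iptables-save | grep -i nixos

# Restore the original rules after experimenting, if needed
sudo iptables-restore < /tmp/iptables-backup.rules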

@saschagrunert
Owner

Yeah, I’m not completely sure, but the iptables rules block the CNI bridge, which keeps CoreDNS from becoming healthy. 🤔
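
If that is the cause, a more targeted workaround than a full iptables -F might be to accept traffic for the cluster network explicitly. A sketch, assuming the default 10.10.0.0/16 cluster CIDR and that the inserted rules land before the blocking ones:

# Allow forwarded traffic to and from the cluster network (10.10.0.0/16 assumed)
sudo iptables -I FORWARD -s 10.10.0.0/16 -j ACCEPT
sudo iptables -I FORWARD -d 10.10.0.0/16 -j ACCEPT

# Allow the host itself to accept traffic coming from the cluster network
sudo iptables -I INPUT -s 10.10.0.0/16 -j ACCEPT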

@siers

siers commented Nov 27, 2019

Well, for me personally the problem is solved, so I guess this is just something for you to be aware of. If you feel like it, you could add a troubleshooting section to the README.

@chrissound
Author

Yup, it seems it's the firewall; iptables -F resolved the issue for me, though that's not really an ideal solution.

Thanks for making this project!

Below are some logs from the coredns pod:

[demo@nixos:~]$ kubectl get pods --all-namespaces 
NAMESPACE     NAME                       READY   STATUS    RESTARTS   AGE
kube-system   coredns-5996f6bb7c-855dn   0/1     Running   0          163m

[demo@nixos:~]$ kubectl logs -n kube-system coredns-5996f6bb7c-855dn
[INFO] plugin/ready: Still waiting on: "kubernetes"
.:53
[INFO] plugin/reload: Running configuration MD5 = 85e038b57e0b532efd30fe2c72ab76b8
CoreDNS-1.6.5
linux/amd64, go1.13.4, c2fd1b2
[INFO] plugin/ready: Still waiting on: "kubernetes"
[INFO] plugin/ready: Still waiting on: "kubernetes"
I1127 16:30:37.465592       1 trace.go:82] Trace[93298261]: "Reflector pkg/mod/k8s.io/client-go@v0.0.0-20190620085101-78d2af792bab/tools/cache/reflector.go:98 ListAndWatch" (started: 2019-11-27 16:30:07.457718408 +0000 UTC m=+0.029701490) (total time: 30.007629316s):
Trace[93298261]: [30.007629316s] [30.007629316s] END
I1127 16:30:37.465673       1 trace.go:82] Trace[240875692]: "Reflector pkg/mod/k8s.io/client-go@v0.0.0-20190620085101-78d2af792bab/tools/cache/reflector.go:98 ListAndWatch" (started: 2019-11-27 16:30:07.460928207 +0000 UTC m=+0.032911310) (total time: 30.004453356s):
Trace[240875692]: [30.004453356s] [30.004453356s] END
E1127 16:30:37.465691       1 reflector.go:125] pkg/mod/k8s.io/client-go@v0.0.0-20190620085101-78d2af792bab/tools/cache/reflector.go:98: Failed to list *v1.Service: Get https://10.10.1.1:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 10.10.1.1:443: i/o timeout
E1127 16:30:37.465691       1 reflector.go:125] pkg/mod/k8s.io/client-go@v0.0.0-20190620085101-78d2af792bab/tools/cache/reflector.go:98: Failed to list *v1.Service: Get https://10.10.1.1:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 10.10.1.1:443: i/o timeout
E1127 16:30:37.465691       1 reflector.go:125] pkg/mod/k8s.io/client-go@v0.0.0-20190620085101-78d2af792bab/tools/cache/reflector.go:98: Failed to list *v1.Service: Get https://10.10.1.1:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 10.10.1.1:443: i/o timeout
I1127 16:30:37.465632       1 trace.go:82] Trace[322171214]: "Reflector pkg/mod/k8s.io/client-go@v0.0.0-20190620085101-78d2af792bab/tools/cache/reflector.go:98 ListAndWatch" (started: 2019-11-27 16:30:07.45713268 +0000 UTC m=+0.029115752) (total time: 30.008220006s):
Trace[322171214]: [30.008220006s] [30.008220006s] END
E1127 16:30:37.465740       1 reflector.go:125] pkg/mod/k8s.io/client-go@v0.0.0-20190620085101-78d2af792bab/tools/cache/reflector.go:98: Failed to list *v1.Namespace: Get https://10.10.1.1:443/api/v1/namespaces?limit=500&resourceVersion=0: dial tcp 10.10.1.1:443: i/o timeout
E1127 16:30:37.465740       1 reflector.go:125] pkg/mod/k8s.io/client-go@v0.0.0-20190620085101-78d2af792bab/tools/cache/reflector.go:98: Failed to list *v1.Namespace: Get https://10.10.1.1:443/api/v1/namespaces?limit=500&resourceVersion=0: dial tcp 10.10.1.1:443: i/o timeout
E1127 16:30:37.465740       1 reflector.go:125] pkg/mod/k8s.io/client-go@v0.0.0-20190620085101-78d2af792bab/tools/cache/reflector.go:98: Failed to list *v1.Namespace: Get https://10.10.1.1:443/api/v1/namespaces?limit=500&resourceVersion=0: dial tcp 10.10.1.1:443: i/o timeout
E1127 16:30:37.465753       1 reflector.go:125] pkg/mod/k8s.io/client-go@v0.0.0-20190620085101-78d2af792bab/tools/cache/reflector.go:98: Failed to list *v1.Endpoints: Get https://10.10.1.1:443/api/v1/endpoints?limit=500&resourceVersion=0: dial tcp 10.10.1.1:443: i/o timeout
E1127 16:30:37.465753       1 reflector.go:125] pkg/mod/k8s.io/client-go@v0.0.0-20190620085101-78d2af792bab/tools/cache/reflector.go:98: Failed to list *v1.Endpoints: Get https://10.10.1.1:443/api/v1/endpoints?limit=500&resourceVersion=0: dial tcp 10.10.1.1:443: i/o timeout
E1127 16:30:37.465753       1 reflector.go:125] pkg/mod/k8s.io/client-go@v0.0.0-20190620085101-78d2af792bab/tools/cache/reflector.go:98: Failed to list *v1.Endpoints: Get https://10.10.1.1:443/api/v1/endpoints?limit=500&resourceVersion=0: dial tcp 10.10.1.1:443: i/o timeout
E1127 16:30:37.465691       1 reflector.go:125] pkg/mod/k8s.io/client-go@v0.0.0-20190620085101-78d2af792bab/tools/cache/reflector.go:98: Failed to list *v1.Service: Get https://10.10.1.1:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 10.10.1.1:443: i/o timeout
E1127 16:30:37.465740       1 reflector.go:125] pkg/mod/k8s.io/client-go@v0.0.0-20190620085101-78d2af792bab/tools/cache/reflector.go:98: Failed to list *v1.Namespace: Get https://10.10.1.1:443/api/v1/namespaces?limit=500&resourceVersion=0: dial tcp 10.10.1.1:443: i/o timeout
E1127 16:30:37.465753       1 reflector.go:125] pkg/mod/k8s.io/client-go@v0.0.0-20190620085101-78d2af792bab/tools/cache/reflector.go:98: Failed to list *v1.Endpoints: Get https://10.10.1.1:443/api/v1/endpoints?limit=500&resourceVersion=0: dial tcp 10.10.1.1:443: i/o timeout
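
The i/o timeouts above are CoreDNS failing to reach the kube-apiserver through the 10.10.1.1 service IP, which points at the same filtering problem. One way to reproduce it outside the pod is to hit that service IP from the host (a sketch; the IP is taken from the logs above):

# If the firewall rules are the problem, this request should also hang and time out;
# once they are flushed, the API server should answer (even if only with an auth error)
curl -k https://10.10.1.1:443/version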

@saschagrunert
Owner

Hm, I’ll add a note to the README about that if I can’t find a better way around the issue. Thanks for the reports and for trying it out. :)

@PanAeon
Contributor

PanAeon commented May 24, 2020

Hmm, I also ran into this error; changing the network to 10.11.0.0/16 resolved the issue.
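
For anyone else hitting a conflict with the default 10.10.0.0/16 range: the cluster network can be chosen when starting kubernix. A sketch (the --cidr flag name should be double-checked against kubernix --help):

# Start the cluster on a different network range
sudo target/release/kubernix --cidr 10.11.0.0/16 --nodes=2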

@saschagrunert
Owner

We may give another default network a try.
