
How to debug hanging "Created API client, waiting for the control plane to become ready" #103

Closed
andersla opened this issue Jan 6, 2017 · 4 comments


andersla commented Jan 6, 2017

I wonder if there is a way to debug and see where in "waiting for the control plane to become ready" kubeadm init hangs. Are there any verbose parameters for kubeadm? Any other log files or recommended debug options?
I am running Ubuntu 16.04, but I am trying to run kubeadm from within a Docker container. On my host it works.

luxas (Member) commented Jan 7, 2017

I mostly just open another shell and run docker ps and/or journalctl -xeu kubelet.
It's hard to build an all-in-one debugging solution, since there is so much information and we're basically just waiting for things to happen (for instance, a slow internet connection might make it take very long even though everything is working).
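
As a rough sketch of that ad-hoc loop (assuming Docker as the container runtime; the container ID is a placeholder):

```bash
# In a second shell while `kubeadm init` waits:
docker ps                                # are the control-plane containers (apiserver, etcd, ...) running?
journalctl -xeu kubelet                  # kubelet logs usually show why pods fail to start
docker logs <apiserver-container-id>     # placeholder ID: inspect one container's own logs
```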

Do you have a proposal for what should be included?

andersla (Author) commented Jan 9, 2017

Thanks, I managed to get past "waiting for the control plane to become ready"; the standard debug options you suggested were enough.

heartarea commented

Run journalctl -xeu kubelet and look at the logs:
'error: failed to run Kubelet: failed to create kubelet: misconfiguration: kubelet cgroup driver: "systemd" is different from docker cgroup driver: "cgroupfs"
systemd[1]: kubelet.service: main process exited, code=exited, status=1/FAILURE
systemd[1]: Unit kubelet.service entered failed state.
systemd[1]: kubelet.service failed.'

The kubelet's cgroup driver was not the same as Docker's cgroup driver, so I changed it from systemd to cgroupfs.
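
A quick way to confirm such a mismatch before editing anything, assuming Docker is the container runtime:

```bash
# What Docker uses:
docker info 2>/dev/null | grep -i 'cgroup driver'          # e.g. "Cgroup Driver: cgroupfs"
# What the kubelet was started with:
ps -ef | grep '[k]ubelet' | grep -o 'cgroup-driver=[a-z]*'
```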

vi /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
and change KUBELET_CGROUP_ARGS=--cgroup-driver=systemd to KUBELET_CGROUP_ARGS=--cgroup-driver=cgroupfs
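
As a non-interactive alternative to the vi edit, a one-liner sketch (GNU sed, keeping a .bak backup of the file):

```bash
# Flip the kubelet's cgroup driver to match Docker's:
sed -i.bak 's/--cgroup-driver=systemd/--cgroup-driver=cgroupfs/' \
    /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
```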

Restart the kubelet:
run 'service kubelet restart'

Everything works after that.
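
Because a systemd drop-in file was edited, reloading systemd before restarting is also a good idea. A minimal sketch of the restart-and-verify step:

```bash
systemctl daemon-reload          # pick up the edited drop-in file
systemctl restart kubelet        # equivalent to `service kubelet restart` on systemd hosts
journalctl -xeu kubelet          # confirm the cgroup-driver error is gone
```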

vuiseng9 commented Jul 27, 2017

@heartarea Appreciate your steps. They work well.

We also need to apply this change on the other nodes, not just the master, so that they can join the cluster.
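
A minimal sketch for a worker node (the token and master address are placeholders, and exact join flags vary by kubeadm version):

```bash
# On each worker, before joining:
sed -i 's/--cgroup-driver=systemd/--cgroup-driver=cgroupfs/' \
    /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
systemctl daemon-reload && systemctl restart kubelet
kubeadm join --token <token> <master-ip>:6443
```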

Is this a bug? Why is it not set this way by default?
