error: You must be logged in to the server - the server has asked for the client to provide credentials - "kubectl logs" command gives error #63128
Comments
/sig cli
I have no idea about this issue, but I have a suggestion:
Hello @CaoShuFeng, we have also tried using kubectl 1.10, but there was no change.
Issue has been solved :)
/close
Same issue here. How about telling us how you solved it?
Check the kubelet logs; they will tell you which flags are deprecated. Remove those flags from the command line and move the settings into the kubelet config file. That solved my problem :)
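For example, on a systemd-based node (such as CoreOS) the kubelet logs can be checked like this; a minimal sketch, assuming the kubelet runs as a systemd unit named `kubelet`:

```bash
# Follow the kubelet logs and look for deprecation warnings
journalctl -u kubelet --no-pager | grep -i deprecated
```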
@CaoShuFeng, in one case I've tracked this issue down to an expired certificate.
@ronakpandya7 Same issue here. How did you check your kubelet logs?
For anyone who hasn't solved this: I've been upgrading our clusters from 1.9 to 1.10, changing kubelet from command-line flags to a configuration file. The default authentication and authorization settings for the kubelet's API differ between CLI args and the config file, so you should make sure to set the "legacy defaults" in the config file to preserve existing behaviour. This is a snippet from my kubelet config that restores the old defaults:
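The snippet itself did not survive in this copy of the thread; a sketch of what it looks like, reconstructed from the legacy defaults in options.go (verify against your kubelet version):

```yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
  anonymous:
    enabled: true     # legacy CLI default (config-file default is false)
  webhook:
    enabled: false    # legacy CLI default (config-file default is true)
authorization:
  mode: AlwaysAllow   # legacy CLI default (config-file default is Webhook)
readOnlyPort: 10255   # legacy default: unauthenticated read-only port stays open
```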
^^ Constructed from kubernetes/cmd/kubelet/app/options/options.go, lines 279 to 291 at commit b71966a.
This is the related issue that led me to this discovery: #59666
When I add the following args, it works (I use k8s 1.11.0):

```
# for kube-apiserver
--kubelet-client-certificate=/etc/kubernetes/pki/apiserver-kubelet-client.crt
--kubelet-client-key=/etc/kubernetes/pki/apiserver-kubelet-client.key

# for kubelet
--client-ca-file=/etc/kubernetes/pki/ca.crt
```
@lenartj Would you mind adding a quick comment on how you managed to renew the certificate?
@pehlert, if your cluster was brought up with kubeadm, then remove the expired certificates (.crt) and re-run kubeadm's certificate generation (see the sketch below).
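The exact command from this comment was not preserved; a hedged sketch for current kubeadm versions (older releases used `kubeadm alpha certs renew` or `kubeadm alpha phase certs` instead):

```bash
# List kubeadm-managed certificates and their expiry dates (kubeadm >= 1.20)
kubeadm certs check-expiration

# Renew all kubeadm-managed certificates
kubeadm certs renew all
```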
@lenartj Only on the master node, or on all nodes? I tried it but still get the error from above.
@pehlert, first of all, are you sure it's an expired-cert issue? Have you actually checked that some of the certificates have indeed expired, and identified which ones? There is a multitude of possible setups, but in general you'd only need to do this on the master node(s); the kubelets can renew their own certs via the master. Also, have you restarted the appropriate Kubernetes components? For example, if apiserver.crt expired and you have now renewed it, you need to restart the apiserver; it won't pick up the new cert automatically. The exact method to restart the component depends on your setup: it could mean deleting a static pod (it will be respawned), deleting a pod created by a DaemonSet, or restarting a service started from systemd/upstart/... If these suggestions do not help, I suggest we move this discussion elsewhere to avoid spamming everyone :)
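A quick way to check which certificates have expired, assuming the default kubeadm PKI layout under /etc/kubernetes/pki (adjust the path for other setups):

```bash
# Print the expiry date of every certificate in the kubeadm PKI directory
for crt in /etc/kubernetes/pki/*.crt; do
  echo -n "$crt: "
  openssl x509 -noout -enddate -in "$crt"
done
```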
@lenartj It turned out that deleting the kube-apiserver pod was not enough to restart the apiserver, for some reason. Although the pod had been deleted and recreated successfully, the apiserver process / Docker container remained untouched, so it hadn't picked up the new certificates yet. Killing the container manually (see the sketch below) finally did the trick.
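A sketch of forcing that restart on a Docker-based control-plane node (container names vary by setup; `<container-id>` is a placeholder to fill in):

```bash
# Find the running kube-apiserver container
docker ps | grep kube-apiserver

# Kill it; the kubelet respawns it from the static pod manifest,
# and the new process picks up the renewed certificates
docker kill <container-id>
```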
@mgxian You wrote above that "the apiserver-kubelet-client.crt must have the right permission (group like system:masters)". If I use a cert with system:masters, everything works fine. Can anybody explain to me how to know which role would be best to use? Which one is the kubelet actually checking for? system:masters is a bit too much access, no? Thanks,
@mmack I deployed a cluster using kubeadm just now, and I found that kubeadm gives apiserver-kubelet-client.crt the 'system:masters' group, so I think that permission should be OK.
@mmack I found this doc on Kubelet authorization. It seems the nodes permission is enough, but I have not tested it; you can try it.
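For anyone wanting something narrower than system:masters, a hedged sketch of an RBAC role granting the apiserver only the kubelet API subresources that log/exec need (the subject name must match the CN of the apiserver's kubelet client certificate; kubeadm uses kube-apiserver-kubelet-client):

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: system:kube-apiserver-to-kubelet
rules:
- apiGroups: [""]
  resources: ["nodes/proxy", "nodes/stats", "nodes/log", "nodes/spec", "nodes/metrics"]
  verbs: ["*"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: system:kube-apiserver
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:kube-apiserver-to-kubelet
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: User
  name: kube-apiserver-kubelet-client  # CN of the apiserver's kubelet client cert
```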
@pehlert Thanks for sharing; we hit the same problem. I renewed the nearly expired apiserver-kubelet-client.crt certificate and deleted the static apiserver pod. Then I left the company and began my Lunar New Year holiday. After that, the old certificate expired silently while 2019-nCoV swept across China. One day during those bad days, someone reported that kubectl log/exec did not work, and the kubelet log said: certificate has expired or is not yet valid. We checked all the certificates, but found that every one of them was valid. It kept puzzling me until I found that the apiserver process had never restarted, even though we had deleted the pod. Killing the process manually finally solved it.
Hi, we got the same issue today on our self-hosted cluster. In our case we found that the admin.conf and .kube/config files did not match with respect to the client-certificate-data and client-key-data keys. Copying admin.conf's client-certificate-data and client-key-data into .kube/config made it work again. We didn't understand why they mismatched, even though neither file had been touched on the day of the issue. Hope this helps. PS: the whole cluster was at the latest version, 1.18, when the issue surfaced.
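A sketch of checking for and fixing such a mismatch, assuming the standard kubeadm paths:

```bash
# Compare the embedded client credentials in the two kubeconfigs
diff <(grep -E 'client-(certificate|key)-data' /etc/kubernetes/admin.conf) \
     <(grep -E 'client-(certificate|key)-data' ~/.kube/config)

# If they differ, replace the user kubeconfig with the admin one
cp /etc/kubernetes/admin.conf ~/.kube/config
```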
Thank you!
In my case, the apiserver logs contained a message indicating an expired certificate: x509: certificate has expired or is not yet valid
In our company we experienced the same error in a similar scenario (k8s version 1.17). This solution worked flawlessly.
@pehlert I'm wondering whether the apiserver Docker container remaining untouched after the pod had been deleted is a bug or a feature? Do you have any idea?
Related to kubernetes/kubernetes#63128. According to the CRC documentation:

> The system bundle in each released crc executable expires 30 days after the release.

Although unverified, it's likely that the certificates were not being updated automatically, giving us an obscure authentication error. Changing the crc version to "latest" resolves the issue. We also needed to make additional changes to address a new error on the latest version, related to: https://access.redhat.com/solutions/4661741

Signed-off-by: Javier Romero <rjavier@vmware.com>
Is this a BUG REPORT or FEATURE REQUEST?:
/kind bug
What happened:
We had set up Kubernetes 1.10.1 on CoreOS with three nodes. The setup was successful, but when I try to view the logs of any pod, kubectl gives the error quoted in the title above. Trying to get inside the pods (using kubectl exec) gives the same error.
What you expected to happen:
Environment:
- Kubernetes version (kubectl version): 1.10.1
- Kernel (uname -a): Linux node1.example.com 4.13.16-coreos-r2 #1 SMP Wed Dec 6 04:27:34 UTC 2017 x86_64 Intel(R) Xeon(R) CPU L5640 @ 2.27GHz GenuineIntel GNU/Linux
We have also specified the "--kubelet-client-certificate" and "--kubelet-client-key" flags in the kube-apiserver.yaml file:
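The manifest itself was not preserved in this copy; a sketch of what those flags look like in a kube-apiserver static pod manifest (paths assume the default kubeadm PKI layout):

```yaml
# /etc/kubernetes/manifests/kube-apiserver.yaml (fragment)
spec:
  containers:
  - name: kube-apiserver
    command:
    - kube-apiserver
    - --kubelet-client-certificate=/etc/kubernetes/pki/apiserver-kubelet-client.crt
    - --kubelet-client-key=/etc/kubernetes/pki/apiserver-kubelet-client.key
```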
So what are we missing here?
Thanks in advance :)