error: You must be logged in to the server - the server has asked for the client to provide credentials - "kubectl logs" command gives error #63128

Closed
ronakpandya7 opened this issue Apr 25, 2018 · 28 comments
Labels
sig/cli Categorizes an issue or PR as relevant to SIG CLI.

Comments

@ronakpandya7

ronakpandya7 commented Apr 25, 2018

Is this a BUG REPORT or FEATURE REQUEST?:
/kind bug

What happened:
We set up Kubernetes 1.10.1 on CoreOS with three nodes.
The setup was successful:

NAME                STATUS    ROLES     AGE       VERSION
node1.example.com   Ready     master    19h       v1.10.1+coreos.0
node2.example.com   Ready     node      19h       v1.10.1+coreos.0
node3.example.com   Ready     node      19h       v1.10.1+coreos.0
NAMESPACE     NAME                                        READY     STATUS    RESTARTS   AGE
default            pod-nginx2-689b9cdffb-qrpjn       1/1       Running   0          16h
kube-system   calico-kube-controllers-568dfff588-zxqjj    1/1       Running   0          18h
kube-system   calico-node-2wwcg                           2/2       Running   0          18h
kube-system   calico-node-78nzn                           2/2       Running   0          18h
kube-system   calico-node-gbvkn                           2/2       Running   0          18h
kube-system   calico-policy-controller-6d568cc5f7-fx6bv   1/1       Running   0          18h
kube-system   kube-apiserver-x66dh                        1/1       Running   4          18h
kube-system   kube-controller-manager-787f887b67-q6gts    1/1       Running   0          18h
kube-system   kube-dns-79ccb5d8df-b9skr                   3/3       Running   0          18h
kube-system   kube-proxy-gb2wj                            1/1       Running   0          18h
kube-system   kube-proxy-qtxgv                            1/1       Running   0          18h
kube-system   kube-proxy-v7wnf                            1/1       Running   0          18h
kube-system   kube-scheduler-68d5b648c-54925              1/1       Running   0          18h
kube-system   pod-checkpointer-vpvg5                      1/1       Running   0          18h

But when I try to see the logs of any pod, kubectl gives the following error:

kubectl logs -f pod-nginx2-689b9cdffb-qrpjn
error: You must be logged in to the server (the server has asked for the client to provide credentials ( pods/log pod-nginx2-689b9cdffb-qrpjn))

Trying to get inside a pod (using kubectl exec) also gives the following error:

kubectl exec -ti pod-nginx2-689b9cdffb-qrpjn bash
error: unable to upgrade connection: Unauthorized

What you expected to happen:

1. It should display the logs of the pods.
2. We should be able to exec into the pods.

Environment:

  • Kubernetes version (use kubectl version):

Client Version: version.Info{Major:"1", Minor:"8", GitVersion:"v1.8.2", GitCommit:"bdaeafa71f6c7c04636251031f93464384d54963", GitTreeState:"clean", BuildDate:"2017-10-24T19:48:57Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"10", GitVersion:"v1.10.0+coreos.0", GitCommit:"6bb2e725fc2876cd94b3900fc57a1c98ca87a08b", GitTreeState:"clean", BuildDate:"2018-04-02T16:49:31Z", GoVersion:"go1.9.3", Compiler:"gc", Platform:"linux/amd64"}
  • OS (e.g. from /etc/os-release):
NAME="Container Linux by CoreOS"
ID=coreos
VERSION=1576.4.0
VERSION_ID=1576.4.0
BUILD_ID=2017-12-06-0449
PRETTY_NAME="Container Linux by CoreOS 1576.4.0 (Ladybug)"
ANSI_COLOR="38;5;75"
HOME_URL="https://coreos.com/"
BUG_REPORT_URL="https://issues.coreos.com"
COREOS_BOARD="amd64-usr"
  • Kernel (e.g. uname -a):

Linux node1.example.com 4.13.16-coreos-r2 #1 SMP Wed Dec 6 04:27:34 UTC 2017 x86_64 Intel(R) Xeon(R) CPU L5640 @ 2.27GHz GenuineIntel GNU/Linux

  • Install tools:
  1. Kubelet (systemd unit)
[Unit]
Description=Kubelet via Hyperkube ACI
[Service]
EnvironmentFile=/etc/kubernetes/kubelet.env
Environment="RKT_RUN_ARGS=--uuid-file-save=/var/run/kubelet-pod.uuid \
  --volume=resolv,kind=host,source=/etc/resolv.conf \
  --mount volume=resolv,target=/etc/resolv.conf \
  --volume var-lib-cni,kind=host,source=/var/lib/cni \
  --mount volume=var-lib-cni,target=/var/lib/cni \
  --volume var-log,kind=host,source=/var/log \
  --mount volume=var-log,target=/var/log"
ExecStartPre=/bin/mkdir -p /etc/kubernetes/manifests
ExecStartPre=/bin/mkdir -p /etc/kubernetes/cni/net.d
ExecStartPre=/bin/mkdir -p /etc/kubernetes/checkpoint-secrets
ExecStartPre=/bin/mkdir -p /etc/kubernetes/inactive-manifests
ExecStartPre=/bin/mkdir -p /var/lib/cni
ExecStartPre=/usr/bin/bash -c "grep 'certificate-authority-data' /etc/kubernetes/kubeconfig | awk '{print $2}' | base64 -d > /etc/kubernetes/ca.crt"
ExecStartPre=-/usr/bin/rkt rm --uuid-file=/var/run/kubelet-pod.uuid
ExecStart=/usr/lib/coreos/kubelet-wrapper \
  --kubeconfig=/etc/kubernetes/kubeconfig \
  --config=/etc/kubernetes/config \
  --cni-conf-dir=/etc/kubernetes/cni/net.d \
  --network-plugin=cni \
  --allow-privileged \
  --lock-file=/var/run/lock/kubelet.lock \
  --exit-on-lock-contention \
  --hostname-override=node1.example.com \
  --node-labels=node-role.kubernetes.io/master \
  --register-with-taints=node-role.kubernetes.io/master=:NoSchedule
ExecStop=-/usr/bin/rkt stop --uuid-file=/var/run/kubelet-pod.uuid
Restart=always
RestartSec=10
[Install]
WantedBy=multi-user.target
  2. KubeletConfig
kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
staticPodPath: "/etc/kubernetes/manifests"
clusterDomain: "cluster.local"
clusterDNS: [ "10.3.0.10" ]
nodeStatusUpdateFrequency: "5s"
clientCAFile: "/etc/kubernetes/ca.crt"

We have also specified the --kubelet-client-certificate and --kubelet-client-key flags in the kube-apiserver.yaml file:

- --kubelet-client-certificate=/etc/kubernetes/secrets/apiserver.crt
- --kubelet-client-key=/etc/kubernetes/secrets/apiserver.key
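
For reference, one way to sanity-check that these credentials line up (a sketch; the paths are taken from the unit and flags above, so run it wherever both files are accessible):

# The apiserver's kubelet-client cert must be signed by the CA the kubelet trusts (clientCAFile)
openssl verify -CAfile /etc/kubernetes/ca.crt /etc/kubernetes/secrets/apiserver.crt
# ...and it must not be expired
openssl x509 -in /etc/kubernetes/secrets/apiserver.crt -noout -subject -enddate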

So what are we missing here?
Thanks in advance :)

@k8s-ci-robot k8s-ci-robot added the needs-sig Indicates an issue or PR lacks a `sig/foo` label and requires one. label Apr 25, 2018
@shubheksha
Contributor

/sig cli

@k8s-ci-robot k8s-ci-robot added sig/cli Categorizes an issue or PR as relevant to SIG CLI. and removed needs-sig Indicates an issue or PR lacks a `sig/foo` label and requires one. labels Apr 25, 2018
@CaoShuFeng
Contributor

I have no idea about this issue.

But I have a suggestion:
What about trying kubectl 1.10 instead of kubectl 1.8?

@ronakpandya7
Author

ronakpandya7 commented Apr 25, 2018

Hello @CaoShuFeng

We have also tried kubectl 1.10, but there was no change.

@ronakpandya7
Author

Issue has been solved :)

@ronakpandya7
Author

/close

@andrewd-uriux

Same issue here - how about telling us how you solved it?

@ronakpandya7
Author

ronakpandya7 commented Apr 27, 2018

Check the kubelet logs; they will tell you which flags are deprecated. Remove those flags and move the corresponding settings into the kubelet config file.

It solved my problems :)
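
For example, the deprecation warnings show up at kubelet startup (a sketch, assuming the kubelet runs as a systemd unit named kubelet):

journalctl -u kubelet | grep -i deprecated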

@lenartj

lenartj commented May 13, 2018

@CaoShuFeng, in one case I tracked this issue down to an expired apiserver-kubelet-client.crt. I renewed the cert, restarted the apiserver, and it went back to normal.
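
A quick way to check for that (a sketch; the path assumes a kubeadm-style layout, adjust it for your setup):

openssl x509 -in /etc/kubernetes/pki/apiserver-kubelet-client.crt -noout -enddate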

@xieydd

xieydd commented May 15, 2018

@ronakpandya7 Same issue here. How did you check your kubelet logs - systemctl status kubelet or journalctl -u kubelet -f? I didn't get any useful information from either.

@JoelSpeed
Contributor

For anyone who hasn't solved this: I've been upgrading our clusters from 1.9 to 1.10, changing the kubelet from command-line flags to a configuration file.

The kubelet API's default authentication and authorization settings differ between the CLI flags and the config file, so you should make sure to set the "legacy defaults" in the config file to preserve the existing behaviour.

This is a snippet from my kubelet config that restores the old defaults:

# Restore default authentication and authorization modes from K8s < 1.9
authentication:
  anonymous:
    enabled: true # Defaults to false as of 1.10
  webhook:
    enabled: false # Defaults to true as of 1.10
authorization:
  mode: AlwaysAllow # Defaults to webhook as of 1.10
readOnlyPort: 10255 # Used by heapster. Defaults to 0 (disabled) as of 1.10. Needed for metrics.

^^ Constructed from:

// applyLegacyDefaults applies legacy default values to the KubeletConfiguration in order to
// preserve the command line API. This is used to construct the baseline default KubeletConfiguration
// before the first round of flag parsing.
func applyLegacyDefaults(kc *kubeletconfig.KubeletConfiguration) {
    // --anonymous-auth
    kc.Authentication.Anonymous.Enabled = true
    // --authentication-token-webhook
    kc.Authentication.Webhook.Enabled = false
    // --authorization-mode
    kc.Authorization.Mode = kubeletconfig.KubeletAuthorizationModeAlwaysAllow
    // --read-only-port
    kc.ReadOnlyPort = ports.KubeletReadOnlyPort
}

This is a relevant issue that led me to this discovery: #59666
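
One way to confirm which behaviour you are getting (a sketch; <node-ip> is a placeholder, and the ports are the conventional kubelet defaults):

# With the legacy defaults above, the unauthenticated requests succeed and the
# read-only port serves data; with the new 1.10 defaults the first one returns 401.
curl -sk https://<node-ip>:10250/pods | head -c 200; echo
curl -s http://<node-ip>:10255/metrics | head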

@mgxian

mgxian commented Jul 11, 2018

When I add the following args it works (I am using k8s 1.11.0).
When getting logs, the apiserver needs to authenticate to the kubelet,
and the apiserver-kubelet-client.crt must have the right permission (a group such as system:masters).

# for kube-apiserver
--kubelet-client-certificate=/etc/kubernetes/pki/apiserver-kubelet-client.crt
--kubelet-client-key=/etc/kubernetes/pki/apiserver-kubelet-client.key

# for kubelet
--client-ca-file=/etc/kubernetes/pki/ca.crt

@pehlert

pehlert commented Sep 2, 2018

@lenartj Would you mind adding a quick comment on how you managed to renew the certificate?

@lenartj

lenartj commented Sep 2, 2018

@pehlert, if your cluster was brought up with kubeadm, then remove the expired certificates (.crt) and execute kubeadm alpha phase certs all.
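
A sketch of that flow for a kubeadm-managed control plane (default paths assumed; note that newer kubeadm versions use kubeadm certs renew all instead of the alpha phase command):

cd /etc/kubernetes/pki
# move the expired cert(s) aside so kubeadm regenerates them
mv apiserver-kubelet-client.crt apiserver-kubelet-client.crt.bak
kubeadm alpha phase certs all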

@pehlert

pehlert commented Sep 2, 2018

@lenartj Only on the master node or on all nodes? I tried it but still get the error from above.

@lenartj

lenartj commented Sep 2, 2018

@pehlert, first of all, are you sure it's an expired-cert issue? Have you actually checked that some of the certificates have indeed expired, and identified which ones? There is a multitude of possible setups, but in general you only need to do this on the master node(s); the kubelets can renew their own certs via the master. Also, have you restarted the appropriate Kubernetes components? For example, if apiserver.crt expired and you have now renewed it, you need to restart the apiserver; it won't pick up the new cert automatically. The exact method to restart the component depends on your setup: it could be deleting a static pod (it will be respawned), deleting a pod created by a DaemonSet, or restarting a service managed by systemd/upstart/... If these suggestions do not help, I suggest we move this discussion elsewhere to avoid spamming everyone :)
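
To identify which certificates have actually expired (a sketch; adjust the path to wherever your certificates live):

for c in /etc/kubernetes/pki/*.crt; do
  echo "== $c"
  openssl x509 -in "$c" -noout -subject -enddate
done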

@pehlert

pehlert commented Sep 3, 2018

@lenartj It turned out that deleting the kube-apiserver pod was not enough to restart the apiserver, for some reason. Although the pod had been deleted and recreated successfully, the apiserver process / docker container remained untouched, so it had not picked up the new certificates yet. Using docker stop on the apiserver container restarted it, and authorization was successful afterwards. Thanks for your help.
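
For anyone hitting the same thing, roughly what that looks like (a sketch assuming a Docker runtime; the kubelet respawns the container from the static pod manifest, so it comes back up with the renewed certificates):

docker ps | grep kube-apiserver
docker stop <container-id>   # <container-id> taken from the output above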

@mmack

mmack commented Oct 19, 2018

@mgxian you wrote above that "the apiserver-kubelet-client.crt must have the right permission (group like system:masters)". If I use a cert with system:masters, everything works fine.

Can anybody explain how to know which group would be best to use? Which one is the kubelet actually checking for? system:masters is a bit too much access, isn't it?

Thanks,
Max

@mgxian

mgxian commented Oct 19, 2018

@mmack I deployed a cluster using kubeadm just now, and I found that kubeadm gives the apiserver-kubelet-client.crt the 'system:masters' group, so I think that permission should be OK.
[screenshot: tim 20181019154452]
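
The same thing can be checked from the command line (a sketch; kubeadm default path assumed):

openssl x509 -in /etc/kubernetes/pki/apiserver-kubelet-client.crt -noout -subject
# kubeadm-issued certs typically show: O = system:masters, CN = kube-apiserver-kubelet-client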

@mgxian

mgxian commented Oct 19, 2018

@mmack I found this doc: Kubelet authorization. It seems the nodes permission is enough, but I have not tested it; you can try it.
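
If you want something narrower than system:masters, a sketch based on that doc (it assumes RBAC and the built-in system:kubelet-api-admin ClusterRole; the user name below follows the kubeadm convention for the apiserver's kubelet-client cert CN, so adjust it to match your cert's CN):

kubectl create clusterrolebinding apiserver-kubelet-api-admin \
  --clusterrole=system:kubelet-api-admin \
  --user=kube-apiserver-kubelet-client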

@chansonzhang

chansonzhang commented Feb 5, 2020

(Quoting @pehlert's earlier comment about using docker stop to restart the apiserver container.)

@pehlert Thanks for sharing; we met the same problem. I renewed the nearly-expired apiserver-kubelet-client.crt and deleted the static apiserver pod. Then I left the company and began my Lunar New Year holiday. After that, the old certificate expired silently while the 2019-nCoV outbreak was sweeping across China. One day during those bad days, someone reported that kubectl logs/exec did not work, and the kubelet log said "certificate has expired or is not yet valid". We checked all the certificates but found they were all valid. It kept puzzling me until I found that the apiserver process had never restarted, even though we had deleted the pod. Killing the process with docker stop <container_id> perfectly solved the problem just now! Thank you again!


@imabhinav

imabhinav commented Aug 25, 2020

We got the same issue today on our self-hosted cluster, and in our case we found that admin.conf and .kube/config did not match with respect to the client-certificate-data and client-key-data keys.
Try the steps below:
kubectl get po --kubeconfig=~/.kube/config (not working)
kubectl get po --kubeconfig=/etc/kubernetes/admin.conf (working)

We copied the admin.conf's client-certificate-data and client-key-data into .kube/config and it started working. We didn't understand why they mismatched, even though neither file had been touched on the day of the issue. Hope this helps.

PS: The whole cluster was at the latest version, 1.18, when the issue surfaced.
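
A quick way to spot that kind of mismatch (a sketch; kubeadm default paths assumed):

diff <(grep client-certificate-data ~/.kube/config) \
     <(grep client-certificate-data /etc/kubernetes/admin.conf)
# if they differ, replacing the stale kubeconfig usually fixes it:
sudo cp /etc/kubernetes/admin.conf ~/.kube/config
sudo chown "$(id -u):$(id -g)" ~/.kube/config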

@myonlyzzy

(Quoting @pehlert's and @chansonzhang's earlier comments about restarting the apiserver container with docker stop.)

Thank you!


@wedobetter

In my case I found messages in the apiserver logs indicating an expired certificate: x509: certificate has expired or is not yet valid.
All the cluster certificates were automatically renewed, but I had to copy /etc/kubernetes/admin.conf into my ~/.kube/config because a new client certificate had been issued. I hope it helps someone.

@manuelchichi

(Quoting @pehlert's and @chansonzhang's earlier comments about restarting the apiserver container with docker stop.)

In our company we experienced the same error in a similar scenario (k8s version 1.17). This solution worked flawlessly.

@chansonzhang

(Quoting @pehlert's earlier comment about using docker stop to restart the apiserver container.)

@pehlert I am wondering whether the apiserver docker container remaining untouched after the pod had been deleted is a bug or a feature. Do you have any idea?

jromero added a commit to buildpacks/ci that referenced this issue Jan 6, 2022
Related to kubernetes/kubernetes#63128

According to the CRC documentation:

> The system bundle in each released crc executable expires 30 days after the release.

Although unverified, it's likely that the certificates were not being updated
automatically, giving us an obscure authentication error. Changing the crc version
to "latest" resolves the issue. We also needed to make additional changes to address
a new error on the latest version related to:

https://access.redhat.com/solutions/4661741

Signed-off-by: Javier Romero <rjavier@vmware.com>
@stevenkitter

After kubeadm certs renew all, you need to restart some control-plane pods (apiserver, controller-manager, ...) on the master and copy the config file to your ~/.kube directory. If you still see a notice like "the server has asked for the client to provide credentials" when you try to get pod logs, just SSH into your node and run systemctl restart kubelet.
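
Putting those steps together (a sketch; kubeadm defaults assumed, and how you restart the control-plane pods depends on your setup):

sudo kubeadm certs renew all
# restart the control-plane components so they load the new certs, e.g. by
# temporarily moving the static pod manifests out of /etc/kubernetes/manifests and back
sudo cp /etc/kubernetes/admin.conf ~/.kube/config
# on any node that still answers with the credentials error:
sudo systemctl restart kubelet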
