
Permission error for getting token #785

Closed
saeid-ir opened this issue Nov 8, 2018 · 11 comments

Comments

saeid-ir commented Nov 8, 2018

I get an error when a worker node tries to join the master node:

[kubelet] Downloading configuration for the kubelet from the "kubelet-config-1.12" ConfigMap in the kube-system namespace
configmaps "kubelet-config-1.12" is forbidden: User "system:bootstrap:2vhny5" cannot get configmaps in the namespace "kube-system"

Any idea?

saeid-ir (Author) commented Nov 8, 2018

I tried many times with a fresh installation, with no success.
I'm fairly sure this is a bug :(.

jnummelin (Contributor) commented:

kubelet-config-1.12

That looks a bit fishy. :)

Which version of pharos is this on?

We don't support 1.12 yet. Are you sure there are no Kubernetes 1.12 components left in the system from previous trials or something?

saeid-ir (Author) commented Nov 8, 2018

I use the latest version, 2.0.0.
Even after reinstalling the OS, the same errors happen.
The full log is:

==> Join nodes @ 31
    [31] Joining host to the master ...
    [31] got error (Pharos::SSH::RemoteCommand::ExecError): SSH exec failed with code 1: sudo kubeadm join localhost:6443 --token w9a6lu.jr95dhqmqjxr29wh --discovery-token-ca-cert-hash sha256:08f74292297d40340f29f0a98310c3b88c67b4321f07a9aa86c6ff9028123d86 --node-name 31 --ignore-preflight-errors DirAvailable--etc-kubernetes-manifests --ignore-preflight-errors SystemVerification
[preflight] running pre-flight checks
	[WARNING DirAvailable--etc-kubernetes-manifests]: /etc/kubernetes/manifests is not empty
	[WARNING RequiredIPVSKernelModulesAvailable]: the IPVS proxier will not be used, because the following required kernel modules are not loaded: [ip_vs_rr ip_vs_wrr ip_vs_sh ip_vs] or no builtin kernel ipvs support: map[ip_vs_sh:{} nf_conntrack_ipv4:{} ip_vs:{} ip_vs_rr:{} ip_vs_wrr:{}]
you can solve this problem with following methods:
 1. Run 'modprobe -- ' to load missing kernel modules;
2. Provide the missing builtin kernel ipvs support

I1108 15:06:02.972045     514 kernel_validator.go:81] Validating kernel version
I1108 15:06:02.972259     514 kernel_validator.go:96] Validating kernel config
	[WARNING SystemVerification]: docker version is greater than the most recently validated version. Docker version: 17.12.1-ce. Max validated version: 17.03
[discovery] Trying to connect to API Server "localhost:6443"
[discovery] Created cluster-info discovery client, requesting info from "https://localhost:6443"
[discovery] Failed to connect to API Server "localhost:6443": token id "w9a6lu" is invalid for this cluster or it has expired. Use "kubeadm token create" on the master node to creating a new valid token
[discovery] Trying to connect to API Server "localhost:6443"
[discovery] Created cluster-info discovery client, requesting info from "https://localhost:6443"
[discovery] Failed to connect to API Server "localhost:6443": token id "w9a6lu" is invalid for this cluster or it has expired. Use "kubeadm token create" on the master node to creating a new valid token
[discovery] Trying to connect to API Server "localhost:6443"
[discovery] Created cluster-info discovery client, requesting info from "https://localhost:6443"
[discovery] Failed to connect to API Server "localhost:6443": token id "w9a6lu" is invalid for this cluster or it has expired. Use "kubeadm token create" on the master node to creating a new valid token
[discovery] Trying to connect to API Server "localhost:6443"
[discovery] Created cluster-info discovery client, requesting info from "https://localhost:6443"
[discovery] Failed to connect to API Server "localhost:6443": token id "w9a6lu" is invalid for this cluster or it has expired. Use "kubeadm token create" on the master node to creating a new valid token
[discovery] Trying to connect to API Server "localhost:6443"
[discovery] Created cluster-info discovery client, requesting info from "https://localhost:6443"
[discovery] Failed to connect to API Server "localhost:6443": token id "w9a6lu" is invalid for this cluster or it has expired. Use "kubeadm token create" on the master node to creating a new valid token
[discovery] Trying to connect to API Server "localhost:6443"
[discovery] Created cluster-info discovery client, requesting info from "https://localhost:6443"
[discovery] Requesting info from "https://localhost:6443" again to validate TLS against the pinned public key
[discovery] Cluster info signature and contents are valid and TLS certificate validates against pinned roots, will use API Server "localhost:6443"
[discovery] Successfully established connection with API Server "localhost:6443"
[kubelet] Downloading configuration for the kubelet from the "kubelet-config-1.12" ConfigMap in the kube-system namespace
configmaps "kubelet-config-1.12" is forbidden: User "system:bootstrap:w9a6lu" cannot get configmaps in the namespace "kube-system"
    [31] retrying after 1 seconds ...
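A side note on the repeated "token id is invalid" lines in the log above: the join eventually connected, so the token itself turned out to be valid, but a token that has genuinely expired must be recreated on the master with `kubeadm token list` / `kubeadm token create --print-join-command`. A minimal client-side sanity check could look like this sketch (`is_bootstrap_token` is a hypothetical helper, not part of kubeadm; it only checks the standard bootstrap-token grammar `[a-z0-9]{6}.[a-z0-9]{16}`):

```shell
# On the master, tokens can be inspected and recreated with:
#   kubeadm token list
#   kubeadm token create --print-join-command
# Hypothetical helper: verify a token string matches the bootstrap-token
# format <6-char id>.<16-char secret> before even attempting a join.
is_bootstrap_token() {
  printf '%s' "$1" | grep -Eq '^[a-z0-9]{6}\.[a-z0-9]{16}$'
}
```

For example, the token from the log, `w9a6lu.jr95dhqmqjxr29wh`, passes this format check; the failures were about the token id not being known to the cluster at that moment, not about its shape.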

jnummelin (Contributor) commented:

Could you check the kubeadm version on the node? A 1.11.4 kubeadm should not try to locate the 1.12 config.

Does this happen on all nodes?

What OS and OS version are the nodes running?

saeid-ir (Author) commented Nov 8, 2018

kubeadm version:

root@31:~# kubeadm version
kubeadm version: &version.Info{Major:"1", Minor:"11", GitVersion:"v1.11.4", GitCommit:"bf9a868e8ea3d3a8fa53cbb22f566771b3f8068b", GitTreeState:"clean", BuildDate:"2018-10-25T19:13:39Z", GoVersion:"go1.10.3", Compiler:"gc", Platform:"linux/amd64"}

And all ConfigMaps are:

root@185:~# kubectl get configmaps --all-namespaces
NAMESPACE     NAME                                 DATA   AGE
kube-public   cluster-info                         7      7h
kube-system   coredns                              1      7h
kube-system   extension-apiserver-authentication   6      7h
kube-system   kube-proxy                           2      7h
kube-system   kubeadm-config                       1      7h
kube-system   kubelet-config-1.11                  1      7h
kube-system   weave-net                            0      7h
root@185:~#

saeid-ir (Author) commented Nov 8, 2018

Same problem: here
I think it might be a kubelet version mismatch: kubeadm is 1.11.4 on both the master and the worker, but the kubelet is 1.12.2.
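That diagnosis fits the error message: on join, kubeadm derives the ConfigMap name `kubelet-config-<major>.<minor>` from the version of the kubelet binary on the node, so a 1.12.2 kubelet asks for `kubelet-config-1.12` even though the 1.11 master only published `kubelet-config-1.11`. A minimal sketch of that naming (the `minor_of` and `configmap_for` helpers are illustrative, not kubeadm code):

```shell
# Derive the kubelet ConfigMap name the way kubeadm does: take the
# "major.minor" part of the kubelet version string (e.g. "v1.12.2" -> "1.12").
minor_of() {
  printf '%s\n' "$1" | sed -E 's/^v?([0-9]+\.[0-9]+).*/\1/'
}

# Build the ConfigMap name kubeadm will request in kube-system.
configmap_for() {
  echo "kubelet-config-$(minor_of "$1")"
}
```

With the versions from this thread, `configmap_for v1.12.2` yields `kubelet-config-1.12`, which is exactly the ConfigMap missing from the cluster above, while `configmap_for v1.11.4` yields the `kubelet-config-1.11` that does exist.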

jnummelin (Contributor) commented:

How did the kubelet get to that version? Pharos should really pin it to the correct version. What OS do the nodes have?

saeid-ir (Author) commented Nov 9, 2018

Finally, I solved it.
One of the nodes kept kubelet 1.12.2 even after I purged everything with apt.
It would be a good idea to check whether another kubelet binary exists in /usr/local/bin, to be sure the setup invokes exactly the binaries it installed.
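That suggested check can be sketched as a small bash snippet (`find_duplicates` is a hypothetical name; `type -aP` is a bashism that lists every executable of a given name on PATH):

```shell
# Warn when more than one copy of a binary is on PATH, e.g. a stale
# kubelet 1.12 in /usr/local/bin shadowing the apt-installed one in /usr/bin.
find_duplicates() {
  # All executables with this name anywhere on PATH (bash-only: type -aP).
  paths=$(type -aP "$1" 2>/dev/null | sort -u)
  n=$(printf '%s\n' "$paths" | grep -c . || true)
  if [ "$n" -gt 1 ]; then
    # Print the conflicting copies so the operator can remove the stale one.
    printf '%s\n' "$paths"
  fi
}
```

If `find_duplicates kubelet` prints two paths, that is exactly the situation hit in this issue, and the copy outside the package manager's directory is the one to remove.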

saeid-ir closed this as completed Nov 9, 2018
jnummelin (Contributor) commented:

@saeidakbari What OS did the problematic node have? We need to know so we can bake in logic that detects this situation and fails more "safely".

saeid-ir (Author) commented Nov 9, 2018

I use Ubuntu 18.04.1 LTS.

jnummelin (Contributor) commented:

Thanks.
