Some node labels in the config file (such as node-role.kubernetes.io/worker) cause kind create cluster to fail with an unfriendly error #3536

wallrj opened this issue Feb 28, 2024 · 6 comments

wallrj commented Feb 28, 2024

I attempted to use the node labels config feature to set a meaningful role name for the worker nodes.
Here's a demonstration of manually labelling the nodes and the effect it has on the output of kubectl get nodes:

$ kubectl get nodes
NAME                 STATUS   ROLES           AGE   VERSION
kind-control-plane   Ready    control-plane   73s   v1.27.3
kind-worker          Ready    <none>          49s   v1.27.3
kind-worker2         Ready    <none>          49s   v1.27.3

$ kubectl label node kind-worker node-role.kubernetes.io/platform=
node/kind-worker labeled

$ kubectl get nodes
NAME                 STATUS   ROLES           AGE   VERSION
kind-control-plane   Ready    control-plane   97s   v1.27.3
kind-worker          Ready    platform        73s   v1.27.3
kind-worker2         Ready    <none>          73s   v1.27.3

But with the following config file, kind create cluster fails after several minutes with the error shown below.

kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
- role: worker
  labels:
    node-role.kubernetes.io/worker: ""
$ kind create cluster --config kind.config.yaml
Creating cluster "kind" ...
 ✓ Ensuring node image (kindest/node:v1.29.2) 🖼
 ✓ Preparing nodes 📦 📦
 ✓ Writing configuration 📜
 ✓ Starting control-plane 🕹️
 ✓ Installing CNI 🔌
 ✓ Installing StorageClass 💾
 ✗ Joining worker nodes 🚜
Deleted nodes: ["kind-control-plane" "kind-worker"]
ERROR: failed to create cluster: failed to join node with kubeadm: command "docker exec --privileged kind-worker kubeadm join --config /kind/kubeadm.conf --skip-phases=preflight --v=6" failed with error: exit status 1
Command Output: I0228 13:43:06.510544     140 join.go:413] [preflight] found NodeName empty; using OS hostname as NodeName
I0228 13:43:06.510617     140 joinconfiguration.go:76] loading configuration from "/kind/kubeadm.conf"
I0228 13:43:06.511206     140 controlplaneprepare.go:225] [download-certs] Skipping certs download
I0228 13:43:06.511234     140 join.go:532] [preflight] Discovering cluster-info
I0228 13:43:06.511247     140 token.go:80] [discovery] Created cluster-info discovery client, requesting info from "kind-control-plane:6443"
I0228 13:43:06.517396     140 round_trippers.go:553] GET https://kind-control-plane:6443/api/v1/namespaces/kube-public/configmaps/cluster-info?timeout=10s 200 OK in 5 milliseconds
I0228 13:43:06.517665     140 token.go:223] [discovery] The cluster-info ConfigMap does not yet contain a JWS signature for token ID "abcdef", will try again
I0228 13:43:12.610900     140 round_trippers.go:553] GET https://kind-control-plane:6443/api/v1/namespaces/kube-public/configmaps/cluster-info?timeout=10s 200 OK in 5 milliseconds
I0228 13:43:12.611048     140 token.go:223] [discovery] The cluster-info ConfigMap does not yet contain a JWS signature for token ID "abcdef", will try again
I0228 13:43:18.173233     140 round_trippers.go:553] GET https://kind-control-plane:6443/api/v1/namespaces/kube-public/configmaps/cluster-info?timeout=10s 200 OK in 4 milliseconds
I0228 13:43:18.173408     140 token.go:223] [discovery] The cluster-info ConfigMap does not yet contain a JWS signature for token ID "abcdef", will try again
I0228 13:43:23.728172     140 round_trippers.go:553] GET https://kind-control-plane:6443/api/v1/namespaces/kube-public/configmaps/cluster-info?timeout=10s 200 OK in 7 milliseconds
I0228 13:43:23.729774     140 token.go:105] [discovery] Cluster info signature and contents are valid and no TLS pinning was specified, will use API Server "kind-control-plane:6443"
I0228 13:43:23.729817     140 discovery.go:52] [discovery] Using provided TLSBootstrapToken as authentication credentials for the join process
I0228 13:43:23.729831     140 join.go:546] [preflight] Fetching init configuration
I0228 13:43:23.729836     140 join.go:592] [preflight] Retrieving KubeConfig objects
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
I0228 13:43:23.739778     140 round_trippers.go:553] GET https://kind-control-plane:6443/api/v1/namespaces/kube-system/configmaps/kubeadm-config?timeout=10s 200 OK in 9 milliseconds
I0228 13:43:23.740952     140 kubeproxy.go:55] attempting to download the KubeProxyConfiguration from ConfigMap "kube-proxy"
I0228 13:43:23.744918     140 round_trippers.go:553] GET https://kind-control-plane:6443/api/v1/namespaces/kube-system/configmaps/kube-proxy?timeout=10s 200 OK in 3 milliseconds
I0228 13:43:23.746979     140 kubelet.go:74] attempting to download the KubeletConfiguration from ConfigMap "kubelet-config"
I0228 13:43:23.750225     140 round_trippers.go:553] GET https://kind-control-plane:6443/api/v1/namespaces/kube-system/configmaps/kubelet-config?timeout=10s 200 OK in 2 milliseconds
I0228 13:43:23.756648     140 initconfiguration.go:114] skip CRI socket detection, fill with the default CRI socket unix:///var/run/containerd/containerd.sock
I0228 13:43:23.756874     140 interface.go:432] Looking for default routes with IPv4 addresses
I0228 13:43:23.756911     140 interface.go:437] Default route transits interface "eth0"
I0228 13:43:23.756998     140 interface.go:209] Interface eth0 is up
I0228 13:43:23.757038     140 interface.go:257] Interface "eth0" has 3 addresses :[172.18.0.2/16 fc00:f853:ccd:e793::2/64 fe80::42:acff:fe12:2/64].
I0228 13:43:23.757055     140 interface.go:224] Checking addr  172.18.0.2/16.
I0228 13:43:23.757066     140 interface.go:231] IP found 172.18.0.2
I0228 13:43:23.757076     140 interface.go:263] Found valid IPv4 address 172.18.0.2 for interface "eth0".
I0228 13:43:23.757085     140 interface.go:443] Found active IP 172.18.0.2
I0228 13:43:23.763621     140 kubelet.go:121] [kubelet-start] writing bootstrap kubelet config file at /etc/kubernetes/bootstrap-kubelet.conf
I0228 13:43:23.764767     140 kubelet.go:136] [kubelet-start] writing CA certificate at /etc/kubernetes/pki/ca.crt
I0228 13:43:23.765328     140 loader.go:395] Config loaded from file:  /etc/kubernetes/bootstrap-kubelet.conf
I0228 13:43:23.765713     140 kubelet.go:157] [kubelet-start] Checking for an existing Node in the cluster with name "kind-worker" and status "Ready"
I0228 13:43:23.768761     140 round_trippers.go:553] GET https://kind-control-plane:6443/api/v1/nodes/kind-worker?timeout=10s 404 Not Found in 2 milliseconds
I0228 13:43:23.769169     140 kubelet.go:172] [kubelet-start] Stopping the kubelet
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
[kubelet-check] Initial timeout of 40s passed.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp [::1]:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp [::1]:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp [::1]:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp [::1]:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp [::1]:10248: connect: connection refused.

Unfortunately, an error has occurred:
        timed out waiting for the condition

This error is likely caused by:
        - The kubelet is not running
        - The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)

If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
        - 'systemctl status kubelet'
        - 'journalctl -xeu kubelet'
timed out waiting for the condition
error execution phase kubelet-start
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow.(*Runner).Run.func1
        cmd/kubeadm/app/cmd/phases/workflow/runner.go:260
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow.(*Runner).visitAll
        cmd/kubeadm/app/cmd/phases/workflow/runner.go:446
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow.(*Runner).Run
        cmd/kubeadm/app/cmd/phases/workflow/runner.go:232
k8s.io/kubernetes/cmd/kubeadm/app/cmd.newCmdJoin.func1
        cmd/kubeadm/app/cmd/join.go:180
github.com/spf13/cobra.(*Command).execute
        vendor/github.com/spf13/cobra/command.go:940
github.com/spf13/cobra.(*Command).ExecuteC
        vendor/github.com/spf13/cobra/command.go:1068
github.com/spf13/cobra.(*Command).Execute
        vendor/github.com/spf13/cobra/command.go:992
k8s.io/kubernetes/cmd/kubeadm/app.Run
        cmd/kubeadm/app/kubeadm.go:50
main.main
        cmd/kubeadm/kubeadm.go:25
runtime.main
        /usr/local/go/src/runtime/proc.go:267
runtime.goexit
        /usr/local/go/src/runtime/asm_amd64.s:1650

Originally posted by @wallrj in #1926 (comment)

neolit123 (Member) commented:

kubeadm (used by kind) enables this controller:
https://kubernetes.io/docs/reference/access-authn-authz/admission-controllers/#noderestriction

The NodeRestriction admission plugin prevents kubelets from deleting their Node API object, and enforces kubelet modification of labels under the kubernetes.io/ or k8s.io/ prefixes as follows:

node-role.kubernetes.io/* are not allowed labels for a kubelet to self-apply.

https://github.com/kubernetes-sigs/kind/pull/1926/files#diff-1e5fea907b390b2fe568c10ed0b138b9fa155a79b4b0b140bbd1daf6562da6c3R340
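
Note that the restriction applies to the kubelet's own (node) credentials, not to cluster admins, which is why the manual kubectl label at the top of this issue succeeds. A minimal sketch of that workaround, assuming the kind.config.yaml from the report with the restricted label removed:

$ kind create cluster --config kind.config.yaml   # config without node-role.kubernetes.io labels
$ kubectl label node kind-worker node-role.kubernetes.io/worker=   # allowed: admin credentials, not the kubelet's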

wallrj (Author) commented Feb 28, 2024

Thanks @neolit123, that explains what's causing the failure.

  1. I'd like Kind to add the node-role.kubernetes.io/${ROLE_NAME}: "" label to every node, so that kubectl get nodes would show the roles of all nodes in a multi-node cluster (not just control-plane).
  2. If Kind is going to continue to use the kubelet to apply the node labels, I'd like Kind to fail early with a clear error message if I choose labels that will be rejected by the NodeRestriction admission controller.
  3. OR Kind could allow any labels to be supplied in the Node config and apply them in some other way, to make life simpler for people who want to use a multi-node Kind cluster for testing node affinity settings.
  4. It would be great if I could also supply Node taints in the config file, so that I could test tolerations in my applications (a possible workaround is sketched at the end of this comment).
  5. And perhaps I should create an issue to update that page on the Kubernetes website, because it misled me into thinking that node-role.kubernetes.io would be allowed. Right now it specifically says:

The NodeRestriction admission plugin prevents kubelets from deleting their Node API object, and enforces kubelet modification of labels under the kubernetes.io/ or k8s.io/ prefixes as follows:

Prevents kubelets from adding/removing/updating labels with a node-restriction.kubernetes.io/ prefix. This label prefix is reserved for administrators to label their Node objects for workload isolation purposes, and kubelets will not be allowed to modify labels with that prefix.

Use of any other labels under the kubernetes.io or k8s.io prefixes by kubelets is reserved, and may be disallowed or allowed by the NodeRestriction admission plugin in the future.
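
For point 4, a possible workaround (just a sketch, not an officially documented kind feature for taints; the example.com/dedicated key and its value are purely illustrative) is to patch the kubeadm JoinConfiguration for the worker node, since kubeadm's nodeRegistration supports taints:

kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
- role: worker
  # Per-node kubeadm patches; JoinConfiguration applies to joining (worker) nodes.
  kubeadmConfigPatches:
  - |
    kind: JoinConfiguration
    nodeRegistration:
      # Hypothetical taint, chosen for illustration only.
      taints:
      - key: "example.com/dedicated"
        value: "platform"
        effect: "NoSchedule"

With that in place, pods need a matching toleration to schedule onto kind-worker, which should be enough to exercise tolerations in tests.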

BenTheElder (Member) commented:

If Kind is going to continue to use the kubelet to apply the node labels, I'd like Kind to fail early with a clear error message if I choose labels that will be rejected by the NodeRestriction admission controller.

Yes, at this point we should add validation to fail early when the config attempts to set labels the kubelet is not allowed to apply.

The restriction now exists in all kind-supported releases (that hasn't always been the case).

OR Kind could allow any labels to be supplied in the Node config and apply them in some other way, to make life simpler for people who want to use a multi-node Kind cluster for testing node affinity settings.

We should not attempt to circumvent the controls in Kubernetes. KIND is all about conformant Kubernetes, and these controls exist for a reason: the kubernetes.io label namespace is owned by the Kubernetes project, and only expected, API-approved usage should exist.

Instead, you can add some other label to your nodes, like foo.dev/role.
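
For example, a minimal kind config sketch (foo.dev/role is just an illustrative key; any label outside the restricted kubernetes.io / k8s.io prefixes can be self-applied by the kubelet):

kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
- role: worker
  labels:
    # A label outside the kubernetes.io / k8s.io prefixes is accepted
    # when the kubelet registers the node.
    foo.dev/role: platform

You can then target it with a nodeSelector or node affinity as usual.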

We likely won't add node-role labels out of the box because, again, that would actively encourage workloads that aren't based on conformant Kubernetes.

In general you shouldn't need this label anyway: the control-plane nodes are already tainted for scheduling purposes.

If you're just listing nodes interactively, kubectl get nodes already shows the role in the name of every kind node.

neolit123 (Member) commented:

5. And perhaps I should create an issue to update that page on the Kubernetes website, because it misled me into thinking that node-role.kubernetes.io would be allowed.

The ticket already exists: kubernetes/website#31992

neolit123 (Member) commented:

I forgot we added this section:
https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm/#managed-node-labels

BenTheElder (Member) commented:

We should also link to that from the kind docs; I thought we had, but clearly we haven't:
https://kind.sigs.k8s.io/docs/user/configuration/#extra-labels
