Some node labels in the config file (such as `node-role.kubernetes.io/worker`) cause `kind create cluster` to fail with an unfriendly error #3536
Comments
kubeadm (used by kind) enables this controller:
Thanks @neolit123, that explains what's causing the failure.
Yes, at this point we should add validation to fail early when a config attempts to set kubelet-disallowed labels. The restriction now exists in all kind-supported releases (that hasn't always been the case).
We should not attempt to circumvent these controls in Kubernetes. KIND is all about conformant Kubernetes, and these controls exist for a reason: the API namespace is owned by the Kubernetes project, and only expected, API-approved usage should exist. Instead, you can add some other label to your nodes.

We likely won't do something like that out of the box because, again, it would actively encourage workloads that are not based on conformant Kubernetes.

In general you shouldn't need to use this label; the control planes will be tainted for scheduling purposes. If you're just listing nodes interactively,
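As a sketch of that alternative (the label key below is illustrative, not taken from this thread), a kind config can attach an ordinary label in your own namespace to a worker node, rather than a reserved `node-role.kubernetes.io/*` key:

```yaml
# Hypothetical example: a custom label under your own prefix is allowed,
# unlike keys in the reserved node-role.kubernetes.io namespace.
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
- role: worker
  labels:
    example.com/node-purpose: worker
```

Workloads can then select nodes with a nodeSelector on `example.com/node-purpose` without relying on Kubernetes-owned label namespaces.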
The ticket exists.
I forgot we added this section:
We should also link to that from the kind docs; I thought we had, but clearly we haven't:
I attempted to use this feature to set a meaningful role name for the worker nodes.
Here's a demonstration of manually labelling the nodes and the effect it has on the output of `kubectl get nodes`.
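For reference, a manual labelling session might look like the following (a sketch: it assumes a cluster created from a config with one worker node, using kind's default node names):

```shell
# Applying the label via kubectl as an admin works: the restriction only
# stops the kubelet from self-assigning labels in this namespace, not
# API clients with sufficient permissions.
kubectl label node kind-worker node-role.kubernetes.io/worker=

# The ROLES column in the output now shows "worker" for that node.
kubectl get nodes
```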
But with the following config file, `kind create cluster` just crashes after a number of minutes.

Originally posted by @wallrj in #1926 (comment)
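The config file itself isn't reproduced above, but based on the issue title the failing case presumably looked something like this (a reconstruction for illustration, not @wallrj's actual file):

```yaml
# Reconstructed example (not the original file): setting a label under the
# reserved node-role.kubernetes.io namespace is what triggers the failure,
# because the kubelet is not permitted to self-assign such labels.
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
- role: worker
  labels:
    node-role.kubernetes.io/worker: ""
```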