[occm] Add taint out-of-service:NoExecute when node shutdown #2556
Comments
Is nobody interested in this feature?
Looks like CAPZ creates an informer for Node objects to populate its node cache, and that's how the taint manipulations are done. We would need to do the same here; InstancesV2 doesn't really have an interface allowing that. I'm supportive of the feature, though I wonder whether this is really a responsibility of CPO and not CAPO. @mdbooth, @zetaab, what do you think?
According to [1], this taint should be set manually by the user, which makes me question even more whether its place is in CPO. [1] https://kubernetes.io/docs/reference/labels-annotations-taints/#node-kubernetes-io-out-of-service
How should we handle non-graceful shutdown on vanilla clusters? How can we automatically recover StatefulSets and detach PVCs? For the moment, deleting the node from the cluster and from the IaC provider, together with this manually applied taint, seems to be the only way.
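For reference, the manual workflow the Kubernetes docs describe for a node that is confirmed shut down (the node name below is a placeholder):

```shell
# Mark a confirmed-dead node out of service so its pods are force-deleted
# and their volumes detached; "worker-1" is an illustrative node name.
kubectl taint nodes worker-1 node.kubernetes.io/out-of-service=nodeshutdown:NoExecute
```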
https://github.com/kubernetes/cloud-provider-openstack/blob/master/pkg/openstack/instancesv2.go#L102-L103 — is this what you are trying to find? Actually, the shutdown taint is already added based on that: https://github.com/kubernetes/cloud-provider/blob/71c6ce7b171ddf09943cb588f6a9427dd3282042/controllers/nodelifecycle/node_lifecycle_controller.go#L190
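A minimal, self-contained sketch of the taint-toggling behaviour the linked node lifecycle controller code implements: when `InstancesV2.InstanceShutdown` reports the instance as off, the shutdown taint is added; when it comes back, the taint is removed. The `Taint` type and function names here are illustrative stand-ins, not the real `k8s.io` API:

```go
package main

import "fmt"

// Taint models the two taint fields that matter here
// (a hypothetical stand-in for k8s.io/api/core/v1.Taint).
type Taint struct {
	Key    string
	Effect string
}

const (
	// Applied today by the cloud-provider node lifecycle controller.
	shutdownTaintKey    = "node.cloudprovider.kubernetes.io/shutdown"
	shutdownTaintEffect = "NoSchedule"
)

// reconcileShutdownTaints returns the taint list a controller would want on a
// node, given whether the cloud reports the instance as shut down.
func reconcileShutdownTaints(taints []Taint, shutdown bool) []Taint {
	out := make([]Taint, 0, len(taints)+1)
	for _, t := range taints {
		if t.Key == shutdownTaintKey {
			continue // drop; re-added below if the instance is still shut down
		}
		out = append(out, t)
	}
	if shutdown {
		out = append(out, Taint{Key: shutdownTaintKey, Effect: shutdownTaintEffect})
	}
	return out
}

func main() {
	fmt.Println(reconcileShutdownTaints(nil, true))
}
```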
The detach/attach logic for the CSI part does not come from this repository. So if that taint is not enough, I would say it's better to fix things outside of just OpenStack. It does not make sense for every Kubernetes cloud provider to do its own magic instead of fixing it in a central place (cloud-provider/CSI core).
Agreed with @zetaab. If something needs to be done on the CSI/cloud-provider side, we need to ask them to handle it.
The Kubernetes project currently lacks enough contributors to adequately respond to all issues. This bot triages un-triaged issues according to the following rules:
You can:
Please send feedback to sig-contributor-experience at kubernetes/community. /lifecycle stale
/close
@sebastienmusso: Closing this issue. In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.
I didn't see this the first time round. I do actually think we could potentially add something like this, but the feature as implemented by Azure looks dangerous to me and we should not do that. 'Fenced' in this context means two things: first, that the node is not currently running its workloads; and second, that it will not start running them again without deliberate action.
These are because there can be no possibility that the node will access shared data while fenced.

An example of where the first is important: there is some control-plane malfunction which does not affect the data plane, e.g. kubelet is crash-looping, but the pods it previously started are still running and continuing to access shared data.

For the second, consider any situation which may cause the Node to become unresponsive for a period of time before starting again. For example, the Node may be NotReady because somebody bumped a cable or misconfigured a switch, or a CPU lockup caused the node to become completely unresponsive for several minutes. The Node can recover from these and become Ready again. We mark it as fenced because we are going to perform actions which will compromise data integrity if the fenced node continues to access data.

The Azure code appears to fail this safety test on both counts:
I have thought for a while that cloud providers could implement a useful fencing API, though. A 'recovery agent' could set a 'fence request' on the Node. The cloud provider could then shut the Node down, take cloud-specific actions to ensure it does not restart without a deliberate action, and then indicate that the Node is fenced. At this point, either the cloud provider or the 'recovery agent' could safely set the out-of-service taint.
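The flow described above can be sketched as a small state machine: the out-of-service taint is only safe once the cloud has both stopped the instance and locked it against restarting. All names here (`FenceState`, `CloudFencer`, `StopAndLock`) are hypothetical, not an existing API:

```go
package main

import (
	"errors"
	"fmt"
)

// FenceState sketches the states of the hypothetical fencing API.
type FenceState int

const (
	FenceNone      FenceState = iota
	FenceRequested            // set by a recovery agent on the Node
	Fenced                    // set once the instance cannot restart on its own
)

// CloudFencer abstracts the cloud-specific actions; the method is illustrative
// (in OpenStack terms, roughly a Nova stop plus lock).
type CloudFencer interface {
	StopAndLock(instanceID string) error
}

// HandleFenceRequest advances a node from FenceRequested to Fenced. Only after
// it returns Fenced would it be safe to add
// node.kubernetes.io/out-of-service:NoExecute.
func HandleFenceRequest(state FenceState, instanceID string, cloud CloudFencer) (FenceState, error) {
	if state != FenceRequested {
		return state, nil // nothing requested; leave the node alone
	}
	if err := cloud.StopAndLock(instanceID); err != nil {
		return state, fmt.Errorf("fencing %s: %w", instanceID, err)
	}
	return Fenced, nil
}

// fakeCloud is a stand-in for a real cloud client, for demonstration only.
type fakeCloud struct{ fail bool }

func (f fakeCloud) StopAndLock(string) error {
	if f.fail {
		return errors.New("cloud unavailable")
	}
	return nil
}

func main() {
	state, err := HandleFenceRequest(FenceRequested, "instance-123", fakeCloud{})
	fmt.Println(state == Fenced, err)
}
```

The key design point is that the taint is gated on the cloud's confirmation, never on an API-server-side status check alone, which is exactly the safety property the Azure implementation is said to lack.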
/kind feature
What happened:
Currently, only the node.cloudprovider.kubernetes.io/shutdown:NoSchedule taint is added to a shut-down node.
Pods with PVCs remain stuck in the Terminating state.
What you expected to happen:
When an instance is shut off, the taint node.kubernetes.io/out-of-service:NoExecute should be added to the node in order to unmap and detach its volumes.
The Azure cloud provider seems to update this taint (https://github.com/kubernetes-sigs/cloud-provider-azure/blob/v1.29.0/pkg/provider/azure.go#L1319C18-L1319C33).
Could the OpenStack cloud provider handle this, and how?
Should this code go into the openstack.go package or the instances one (https://github.com/kubernetes/cloud-provider-openstack/blob/master/pkg/openstack/openstack.go)?
I could make the PR if needed.
Environment:
openstack-cloud-controller-manager version: any
OpenStack version: any
Kubernetes version 1.24+