`eksctl delete cluster` should not drain nodes #7719
Comments
+1. We have short-lived EKS clusters used for testing k8s stuff; we create and destroy them. At least give us an option to opt out.
@rglonek there is already an option for skipping pod eviction: pass `--disable-nodegroup-eviction`.
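For illustration, the flag mentioned in this thread is passed directly to the delete command (the cluster name below is a placeholder, not from the issue):

```console
# Skip pod eviction/draining when tearing the cluster down
eksctl delete cluster --name <cluster-name> --disable-nodegroup-eviction
```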
It skips node eviction, but still runs through the draining process, no? The logs seem to indicate so, unlike when I do
This is not a bug. Changing this default behavior is a bad idea because incidents do happen! Explicit use of `--disable-nodegroup-eviction` is preferable.
This issue is stale because it has been open 30 days with no activity. Remove the stale label or comment, or this will be closed in 5 days.
When deleting a cluster, eksctl drains all nodes before proceeding with nodegroup deletion. This behaviour is of little use and further increases the duration of the command, as the workloads will remain unschedulable after pod termination.

Starting with CoreDNS v1.9.3-eksbuild.5 and v1.10.1-eksbuild.2, the CoreDNS addon (installed by default) also creates a `PodDisruptionBudget` with `maxUnavailable: 1`, preventing the nodes from being drained: attempting to evict more than one CoreDNS pod at a time would violate the `PodDisruptionBudget`. While this can be worked around by passing `--disable-nodegroup-eviction`, `eksctl delete cluster` should work without needing any additional arguments in most cases.

We may optionally still want to gracefully terminate pods, to give them a chance to clean up any external resources.
Related: #6287