PV is stuck at terminating after PVC is deleted #69697
Several questions:
/sig storage
It looks like your PV still has a finalizer from the attacher. Can you verify that the volume got successfully detached from the node?
It may be good to get logs from the external-attacher, and also the A/D controller.
cc @jsafrane
What version of the external-attacher are you using?
It's v0.3.0, and all the other sidecars are v0.3.0 as well. I was using v0.4.0 earlier, and this issue happened after I recreated the sidecars at v0.3.0.
Updated the description with the attacher log.
The volume should have been detached successfully, since it was deleted from AWS (I don't think it could be deleted without detaching). I also verified on the node that the device is gone using
It looks like the volume was marked for deletion before an attach ever succeeded. Maybe there is some bug with handling that scenario. Do you still see a VolumeAttachment object?
How can I check this?
kubectl get volumeattachment
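For a single PV you can filter the attachments by their source (a sketch; `<pv-name>` is a placeholder):

```sh
# Print the name of any VolumeAttachment that references the given PV
kubectl get volumeattachment \
  -o jsonpath='{.items[?(@.spec.source.persistentVolumeName=="<pv-name>")].metadata.name}'
```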
Yep. It's still there:
Reading the logs, it seems like the A/D controller tried to attach the volume and got an error from the external attacher. Why did it not delete the VolumeAttachment afterwards? Do you still have a pod that uses the volume? If so, it blocks PV deletion.
There is no pod using the volume, and the PVC is gone as well. How can I find the A/D controller log?
It's on the master node, in controller-manager.log. You can try to filter by searching for the volume name.
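A sketch of where to look, assuming a log file at the conventional path (exact paths vary by distribution; on kubeadm-style clusters the controller manager runs as a static pod instead):

```sh
# Filter the controller-manager log on the master node for the volume name
grep "<pv-name>" /var/log/kube-controller-manager.log
# On kubeadm-style clusters, read the static pod's log instead
kubectl -n kube-system logs kube-controller-manager-<master-node-name> | grep "<pv-name>"
```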
Here is the controller log:
The last line is repeated indefinitely.
I have encountered this issue two more times now, all on v1.12.
I got rid of this issue by performing the following actions:
Then I manually edited the
@chandraprakash1392's answer is still valid when
Removing the finalizers is just a workaround. @bertinatto @leakingtapan, could you help repro this issue and save detailed CSI driver and controller-manager logs?
Examples of removing the finalizers, so that you can then delete them. !!! IMPORTANT !!!:
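A minimal sketch of that workaround, with placeholder object names:

```sh
# Clear the finalizers on a stuck PV so the API server can finish the delete
kubectl patch pv <pv-name> -p '{"metadata":{"finalizers":null}}'
# The same workaround applies to a PVC stuck in Terminating
kubectl patch pvc <pvc-name> -n <namespace> -p '{"metadata":{"finalizers":null}}'
```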
I managed to reproduce it after a few tries, although the log messages seem a bit different from the ones reported by @leakingtapan:
Plugin (provisioner): https://gist.github.com/bertinatto/16f5c1f76b1c2577cd66dbedfa4e0c7c
Same here, had to delete the finalizer. Here's a describe for the PV:
@jsafrane I only have one pod, and I deleted the PVC after the pod was deleted.
No, when this happens to my PV, the pod no longer exists.
On September 25, 2020, lsambolino wrote:
> Solved the issue of the PVC and PV stuck in the "terminating" state by deleting the pod using it.
I had multiple PVCs stuck in Terminating status. Thanks for posting these commands to get rid of them.
@wolfewicz @DMXGuru if the pods are deleted, PVCs should not get stuck in the Terminating state. Users should not need to remove the finalizer manually.
How and what details would you like? The kubectl commands and output showing this behavior, and then a kubectl describe and kubectl get -o yaml for the resultant PV?
On Oct 8, 2020, Jing Xu wrote:
> Could you reproduce your case and give some details here, so that we can help triage?
@DMXGuru the first thing I want to verify is that there are no pods running and no VolumeSnapshots being taken while the PVC/PV is terminating: kubectl describe pod | grep ClaimName. Second, could you describe the sequence in which you performed the pod and PVC deletion? Thanks!
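Spelled out a little more (a sketch; the VolumeSnapshot listing only works if the snapshot CRDs are installed):

```sh
# Verify that no pod in any namespace still references a claim
kubectl describe pod --all-namespaces | grep ClaimName
# Verify that no VolumeSnapshot still references the claim
kubectl get volumesnapshot --all-namespaces
```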
Issues go stale after 90d of inactivity. Stale issues rot after a further 30d of inactivity, and rotten issues close after another 30d.
@fejta-bot: Closing this issue.
@misanthropicat: You can't reopen an issue/PR unless you authored it or you are a collaborator.
So, still no solution? I was hit by this too.
We've hit what seems to be the same problem using Amazon EBS (we have all the symptoms, at least). Should this be raised as a new issue?
/reopen |
@jleni: You can't reopen an issue/PR unless you authored it or you are a collaborator.
Still happening with 1.21: PVs stuck in Terminating after deleting the corresponding pods, PVs, and VolumeAttachments. The storage driver is Longhorn.
Encountered this error in v1.21.4 too. @chandraprakash1392's guide worked! Although I had been deleting and creating a high number of PVs and PVCs in a short span of time; maybe something gets bottlenecked and results in this bug?
Encountered this as well on 1.18 with EBS.
A more hands-free option, in case someone finds it useful.
Finding the PVCs which are in Terminating and patching them in a loop:
Check to see if anything still remains:
@getkub, the code you gave patches all PVs. My suggestion:

# Loop over all PV names
for mypv in $(kubectl get pv -o jsonpath="{.items[*].metadata.name}"); do
    # If a given PV isn't being deleted, skip to the next one
    # (a Terminating PV has metadata.deletionTimestamp set; status.phase stays Bound/Released)
    if [ -z "$(kubectl get pv "$mypv" -o jsonpath='{.metadata.deletionTimestamp}')" ]; then continue; fi
    # Patch the PV stuck in Terminating to clear its finalizers
    kubectl patch pv "$mypv" -p '{"metadata":{"finalizers":null}}'
done

And you should also take care of the Lost PVCs which were Bound to those PVs.
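A quick way to find those Lost claims afterwards (a sketch):

```sh
# List PVCs left in the Lost phase after their PVs were force-deleted
kubectl get pvc --all-namespaces | grep -w Lost
```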
Apologies for responding to a closed ticket, but I noticed a similar issue and have some questions. Was this ever solved? I experienced similar behaviour using the Cinder driver and OpenStack for storage.
I am not sure if this is as designed, and if not, whether it should be reported as a bug/improvement here or at OpenStack.
Removing the finalizer worked for me.
Earlier comments seem to point at the right answer: delete the pod using it. Reposting links to those comments for better visibility:
#69697 (comment) from @lsambolino
#69697 (comment) from @jsafrane
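A sketch of how to find which pod still mounts the claim (`<pvc-name>`, `<pod-name>`, and `<namespace>` are placeholders):

```sh
# Print namespace, pod name, and the PVCs each pod mounts, then filter for the claim
kubectl get pods -A -o jsonpath='{range .items[*]}{.metadata.namespace}{"\t"}{.metadata.name}{"\t"}{.spec.volumes[*].persistentVolumeClaim.claimName}{"\n"}{end}' | grep "<pvc-name>"
# Delete that pod so the volume can be unmounted, detached, and the PV deleted
kubectl delete pod <pod-name> -n <namespace>
```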
I did that. That's what stuck the PV. I've given up on k8s and gone to Proxmox.
Is this a BUG REPORT or FEATURE REQUEST?:
/kind bug
What happened:
I was testing the EBS CSI driver. I created a PV using a PVC, then I deleted the PVC. However, PV deletion is stuck in the Terminating state. Both the PVC and the volume were deleted without any issue. The CSI driver keeps being called with DeleteVolume, even though it returns success when the volume is not found (because it is already gone).
CSI driver log:
External attacher log:
StorageClass:
Claim:
What you expected to happen:
After the PVC is deleted, the PV should be deleted along with the EBS volume (since my reclaim policy is Delete).
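(For reference, a sketch of how to confirm the reclaim policy on the PV; `<pv-name>` is a placeholder:)

```sh
# Should print "Delete" for this setup
kubectl get pv <pv-name> -o jsonpath='{.spec.persistentVolumeReclaimPolicy}'
```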
How to reproduce it (as minimally and precisely as possible):
Non-deterministic so far
Anything else we need to know?:
Environment:
Kubernetes version (kubectl version): client v1.12.0, server v1.12.1
Kernel (uname -a):