
Race condition: terminating pod destroys PV mount on new pod #601

Open
HWiese1980 opened this issue Feb 1, 2024 · 3 comments
Labels
lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale.

Comments

@HWiese1980

What happened:
I seem to have a race condition between pods.

I have a simple debug pod that mounts an NFS share as an RWX (ReadWriteMany) PVC/PV, using NFS CSI as the storage driver/class. The first time I deployed the pod, everything went well.

Now when I delete the pod to start a new one (e.g., to remount the share after export settings change on the server side), the mount in the new pod goes stale the moment the old pod actually terminates (disappears).

When I instead set the replica count of the deployment to 0, wait for the pod to terminate, and then set it back to 1, so that there are no overlapping Pending/Terminating pods but a clean, undisturbed new debug pod, the mount inside the pod remains stable.
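For what it's worth, the scale-to-zero workaround suggests the pod overlap itself is the trigger. A Deployment can avoid overlapping old/new pods declaratively with the `Recreate` strategy (this works around the overlap rather than the stale mount itself; the deployment name is a hypothetical placeholder):

```yaml
# Deployment fragment: with strategy Recreate, the old pod is terminated
# completely before its replacement is created, so the two pods' mounts
# never coexist on a node.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nfs-debug        # hypothetical name
spec:
  strategy:
    type: Recreate       # default is RollingUpdate, which overlaps old and new pods
  # ...rest of the deployment spec (selector, template) unchanged
```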

What you expected to happen:

The mount inside the new pod should remain stable, even after the old pod terminates and unmounts its own binding of the same PV/PVC (remember: RWX).

How to reproduce it:

  • Create a storage class using an NFS share
  • Create an RWX PVC/PV pair
  • Create a debug pod deployment mounting that PVC/PV somewhere
  • Open a terminal to the newly created pod and verify that the NFS mount works
  • Delete the debug pod
  • Open a terminal in the replacement pod
  • Watch the mount keep working as long as the old pod is in Terminating state
  • Watch the mount go stale as soon as the old pod disappears
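The steps above can be sketched as manifests roughly like these (all names are assumptions; `nfs-csi` is the storage class from the report below; the PVC requests ReadWriteMany so the old and new pods can mount it simultaneously):

```yaml
# RWX claim backed by the NFS CSI storage class
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs-debug-pvc
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: nfs-csi
  resources:
    requests:
      storage: 1Gi
---
# Debug deployment that mounts the claim
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nfs-debug
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nfs-debug
  template:
    metadata:
      labels:
        app: nfs-debug
    spec:
      containers:
        - name: debug
          image: debian:12
          command: ["sleep", "infinity"]
          volumeMounts:
            - name: share
              mountPath: /mnt/share
      volumes:
        - name: share
          persistentVolumeClaim:
            claimName: nfs-debug-pvc
```

Deleting the pod created by this deployment (`kubectl delete pod -l app=nfs-debug`) makes the ReplicaSet start a replacement while the old pod is still Terminating, which is the overlap described in the steps above.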

Anything else we need to know?:

  • NFS server is haneWIN NFS on a Windows system
  • I'm on a bare-metal K3s cluster consisting of 3 master and 3 worker nodes
  • My nodes run on Debian 11 and Debian 12
  • Some nodes are Hyper-V VMs on my Windows PC
  • Some nodes are actual thin client PCs (1.6GHz AMD CPU, 16GB SODIMM DDR3 RAM)
  • Overall, the cluster is working pretty well
  • Client Version: v1.28.3 (kubectl)
  • Kustomize Version: v5.0.4-0.20230601165947-6ce0bf390ce3
  • Server Version: v1.28.5+k3s1
  • haneWIN version 1.2.67
  • NFSvers = 3
Storage class:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: nfs-csi
provisioner: nfs.csi.k8s.io
parameters:
  server: <my windows host IP>
  share: /some_share
reclaimPolicy: Delete  # might be the culprit? I don't know, I'm a noob
volumeBindingMode: Immediate  # might also be the culprit? I don't know... I'm a noob...
mountOptions:
  - nfsvers=3
  - nolock
  - soft
  - rw
```

Environment:

  • CSI Driver version: 4.6.0
  • Kubernetes version (use kubectl version): 1.28.3
  • OS (e.g. from /etc/os-release): Debian 11/Debian 12 on the nodes; Windows 11 on the NFS server host (haneWIN 1.2.67, NFSv3)
  • Kernel (e.g. uname -a): vanilla kernel of the corresponding distro/OS. no changes here.
@HWiese1980
Author

Please get back to me if you were able to replicate the behavior, or if you know how I can fix it on my side! Thanks!

@andyzhangx
Member

A new pod with an NFS volume should get a standalone NFS mount to the remote NFS server; deleting the old pod should only unmount that old pod's own NFS mount.

Per your description, an existing NFS mount on the node becomes stale when another NFS mount on the same node is unmounted. I think you could try to reproduce this issue without using k8s.

Also, please provide the NFS CSI driver logs on the node, following: https://github.com/kubernetes-csi/csi-driver-nfs/blob/master/docs/csi-debug.md#case2-volume-mountunmount-failed

@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle stale
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label May 4, 2024