What happened:
I seem to have a race condition between pods...
I have a simple debug pod that mounts an NFS share as an RWX (ReadWriteMany) PVC/PV, using NFS CSI as the storage driver/class. The first time I deployed the pod, everything went well.
Now, when I delete the pod to start a new one (e.g., to remount the share after export settings change on the server side), the mount goes stale the moment the old pod actually terminates (disappears).
When I instead set the replica count of the deployment to 0, wait for the pod to terminate, and then set it back to 1, so that the terminating and pending pods never overlap and the new debug pod starts clean and undisturbed, the mount inside the pod remains stable. The workaround is sketched below.
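A minimal sketch of that workaround (the deployment and label names are placeholders):

kubectl scale deployment/nfs-debug --replicas=0
kubectl wait --for=delete pod -l app=nfs-debug --timeout=120s   # block until the old pod is fully gone
kubectl scale deployment/nfs-debug --replicas=1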
What you expected to happen:
The mount inside the new pod should remain stable, even if the old pod terminates and unmounts its own PV/PVC binding (remember: the volume is RWX).
How to reproduce it:
Create a storage class using an NFS share
Create an RWX PVC/PV pair
Create a debug pod deployment mounting that PVC/PV somewhere (example manifests after these steps)
Open a terminal to the newly created pod
Observe that the NFS mount is stable in this pod, which was created right after the deployment
Delete the debug pod
Open a terminal in the replacement pod
Watch the mount keep working as long as the old pod is in the Terminating state
Watch the mount go stale as soon as the old pod disappears
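A minimal sketch of the PVC and deployment, assuming the nfs-csi storage class below; all names, the image, and the mount path are placeholders:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs-debug-pvc
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: nfs-csi
  resources:
    requests:
      storage: 1Gi
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nfs-debug
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nfs-debug
  template:
    metadata:
      labels:
        app: nfs-debug
    spec:
      containers:
        - name: debug
          image: debian:12
          command: ["sleep", "infinity"]   # keep the pod alive for interactive debugging
          volumeMounts:
            - name: nfs
              mountPath: /mnt/nfs
      volumes:
        - name: nfs
          persistentVolumeClaim:
            claimName: nfs-debug-pvc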
Anything else we need to know?:
NFS server is haneWIN NFS on a Windows system
I'm on a self-hosted K3s cluster consisting of 3 master and 3 worker nodes
My nodes run on Debian 11 and Debian 12
Some nodes are Hyper-V VMs on my Windows PC
Some nodes are actual thin client PCs (1.6GHz AMD CPU, 16GB SODIMM DDR3 RAM)
Storage Class:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: nfs-csi
provisioner: nfs.csi.k8s.io
parameters:
  server: <my windows host IP>
  share: /some_share
reclaimPolicy: Delete # might be the culprit? I don't know, I'm a noob
volumeBindingMode: Immediate # might also be the culprit? I don't know... I'm a noob...
mountOptions:
  - nfsvers=3
  - nolock
  - soft
  - rw
Environment:
CSI Driver version: 4.6.0
Kubernetes version (use kubectl version): 1.28.3
OS (e.g. from /etc/os-release): Debian 11 / Debian 12 (nodes); Windows 11 (NFS server host); haneWIN 1.2.67 (NFS server); NFS protocol version 3
Kernel (e.g. uname -a): vanilla kernel of the corresponding distro/OS, no changes here.
The new pod with an NFS volume should have a standalone NFS mount to the remote NFS server; if you delete the old pod, only the old pod's existing NFS mount is unmounted.
Per your description, an existing NFS mount on the node becomes stale when you unmount another NFS mount of the same share on that node. I think you could try to reproduce this issue without using Kubernetes; a sketch follows.
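A minimal sketch of such a standalone repro on one node, reusing the mount options from the storage class (the server address and directories are placeholders):

mkdir -p /mnt/nfs-a /mnt/nfs-b
mount -t nfs -o nfsvers=3,nolock,soft,rw <server>:/some_share /mnt/nfs-a
mount -t nfs -o nfsvers=3,nolock,soft,rw <server>:/some_share /mnt/nfs-b
ls /mnt/nfs-a        # both mounts work at this point
umount /mnt/nfs-b    # stands in for the old pod's volume teardown
ls /mnt/nfs-a        # check whether the surviving mount has gone stale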