SMB volume map disconnecting on windows nodes. #733
Comments
Questions (I don't have the solution): which version are you on, v1.14? How many SMB CSI controllers do you have? What's your backend storage? We had similar problems with v1.12–v1.13 when we had 2 controllers in values.yaml (deployed by Helm) with an Azure NetApp Files SMB share. After changing to 1 controller on v1.14 we've had no drops, but it's only been a day.
Thanks for the prompt response @deedubb. Kubernetes version is v1.26, one SMB controller, and NetApp is the backend storage. We have tried csi-node-driver-registrar v2.10.0 but no luck; the issue reoccurs after some time.
We are using v1.12 in our environment; will try with v1.14.
I don't think that's related to the CSI driver version; the SMBGlobalMapping is broken on the Windows node, and that's the root cause.
@andyzhangx could you be more verbose? Can you suggest some configuration, or how to troubleshoot and mitigate such problems? What I find annoying is that the share might go offline temporarily and connectivity might break, but the share wouldn't re-establish on its own for me; I had to evict the node and provision a new one. I also didn't have a good node-readiness signal that the share was actually accessible. If we can detect that the share is unavailable or has gone offline, maybe an example readiness or liveness probe could be provided?
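One way to inspect the mapping state on the affected node (a troubleshooting sketch using standard Windows SmbShare cmdlets, not a fix confirmed by the maintainers; `\\server\share` is a placeholder):

```powershell
# Run on the Windows worker node. Lists all SMB global mappings and their
# status; a broken mapping typically reports a Status other than "OK"
# (e.g. "Disconnected").
Get-SmbGlobalMapping | Format-Table RemotePath, Status

# Probe the share target directly; replace \\server\share with your share path.
Test-Path -Path '\\server\share'
```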
Thanks for the reply @andyzhangx and @deedubb. Yes, exactly: SMBGlobalMapping is broken on the node. After we applied the workaround below it's working without a node reboot. Do we have any permanent fix for this problem?
When the SMB mount is broken on the node, the only Kubernetes-native remedy is to remove that mount path by deleting the pod; the SMB volume mount then happens again. There is no suitable fix from the CSI driver side: from the CSI driver's perspective, once the mount is complete it does not monitor whether the underlying mount is healthy; that's out of the CSI driver's scope. I think you need to check why the SMBGlobalMapping mount breaks frequently on the node; is there any way for SMBGlobalMapping to recover by itself?
@andyzhangx I appreciate that it might not be in the CSI driver's scope; but in the "I use this software package to build and maintain my SMB/CIFS share connectivity" mindset, would you have any suggestions for how to monitor the share, repair the share, and/or mark the node liveness as unhealthy?
@deedubb if the SMB volume is invalid on the node, the pod could have a liveness probe that checks the mount path and crashes if the SMB volume is invalid; your operator could then start a new pod on the node and delete the crashing one.
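The suggestion above can be sketched as an exec liveness probe that exercises the SMB mount path. A minimal, hypothetical pod fragment, assuming the volume is mounted at `C:\mnt\smb` in a Windows container (the image, mount path, and PVC name are placeholders):

```yaml
# When the probe command exits non-zero (mount path unreadable), the kubelet
# restarts the container, which re-triggers the volume mount.
apiVersion: v1
kind: Pod
metadata:
  name: smb-consumer
spec:
  nodeSelector:
    kubernetes.io/os: windows
  containers:
    - name: app
      image: mcr.microsoft.com/windows/servercore:ltsc2022   # placeholder image
      command: ["powershell", "-Command", "Start-Sleep -Seconds 360000"]
      volumeMounts:
        - name: smb
          mountPath: "C:\\mnt\\smb"
      livenessProbe:
        exec:
          command: ["powershell", "-Command", "if (-not (Test-Path 'C:\\mnt\\smb')) { exit 1 }"]
        initialDelaySeconds: 30
        periodSeconds: 30
        failureThreshold: 3
  volumes:
    - name: smb
      persistentVolumeClaim:
        claimName: smb-pvc   # placeholder PVC name
```

Note that `Test-Path` only confirms the path is reachable; depending on the failure mode you may want the probe to read a file from the share instead.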
The Kubernetes project currently lacks enough contributors to adequately respond to all issues. This bot triages un-triaged issues according to the following rules:
You can:
Please send feedback to sig-contributor-experience at kubernetes/community. /lifecycle stale
What happened:
The SMB volume intermittently fails to mount into the container, with the errors below appearing in the pod events. The csi-smb-node-win driver pod logs show access denied, and the volume shows as disconnected on the Windows worker node. We are applying the workaround below to recover, but we are looking for a permanent solution; please share the resolution if you have come across this issue.
Error:
MountVolume.MountDevice failed for volume "ntfs-logs" : kubernetes.io/csi: attacher.MountDevice failed to create dir "\var\lib\kubelet\plugins\kubernetes.io\csi\smb.csi.k8s.io\4e5012244d1604e40fc127a03220a74836a874f6d38386cf183428b777f34f64\globalmount": mkdir \var\lib\kubelet\plugins\kubernetes.io\csi\smb.csi.k8s.io\4e5012244d1604e40fc127a03220a74836a874f6d38386cf183428b777f34f64\globalmount: Cannot create a file when that file already exists.
Workaround:
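The original workaround steps were not captured here. A commonly reported manual remediation for a stale global mapping (an assumption on my part, not necessarily the reporter's exact steps; the share path and credentials are placeholders) is to drop and recreate the mapping on the affected Windows node:

```powershell
# Remove the stale mapping for the share.
Remove-SmbGlobalMapping -RemotePath '\\server\share' -Force

# Recreate it with the share credentials; values below are placeholders.
$user = 'DOMAIN\smbuser'
$pass = ConvertTo-SecureString 'REDACTED' -AsPlainText -Force
$cred = New-Object System.Management.Automation.PSCredential($user, $pass)
New-SmbGlobalMapping -RemotePath '\\server\share' -Credential $cred -Persistent $true
```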
Pods should then be able to connect to the SMB share
What you expected to happen:
The volume should map to the container with no disconnection of the SMBGlobalMapping. Instead, the pod suddenly crashes because the volume mapping is disconnected and it cannot reconnect to the volume.
How to reproduce it:
It's an intermittent issue; after a node reboot everything works fine, so we are not able to reproduce it on demand.
Anything else we need to know?:
We are using csi-provisioner:v3.5.0 and csi-node-driver-registrar:v2.10.0 in our environment.
Environment: dev environment
Kubernetes version (kubectl version): v1.26.9+c7606e7
OS (uname -a): Windows Server 2022