
SMB volume map disconnecting on windows nodes. #733

Open
rreaplli opened this issue Jan 18, 2024 · 10 comments
Labels: lifecycle/stale (Denotes an issue or PR has remained open with no activity and has become stale.)

Comments


rreaplli commented Jan 18, 2024

What happened:
The SMB volume intermittently fails to mount into the container, with the error below in the pod events. The csi-smb-node-win driver pod logs show "access denied", and the volume shows as disconnected on the Windows worker node. We apply the workaround below to recover, but we are looking for a permanent fix; please share a resolution if you have come across this issue.

Error:

```
MountVolume.MountDevice failed for volume "ntfs-logs" : kubernetes.io/csi: attacher.MountDevice failed to create dir "\var\lib\kubelet\plugins\kubernetes.io\csi\smb.csi.k8s.io\4e5012244d1604e40fc127a03220a74836a874f6d38386cf183428b777f34f64\globalmount": mkdir \var\lib\kubelet\plugins\kubernetes.io\csi\smb.csi.k8s.io\4e5012244d1604e40fc127a03220a74836a874f6d38386cf183428b777f34f64\globalmount: Cannot create a file when that file already exists.
```
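
To confirm the broken mapping on the node, a quick check (a hedged sketch; run in an elevated PowerShell session on the affected Windows worker):

```powershell
# List the kernel-level SMB global mappings created by the CSI driver,
# including their Status, to spot a mapping that has gone bad.
Get-SmbGlobalMapping | Format-List RemotePath, LocalPath, Status

# The classic session view: a broken share shows as "Disconnected" here.
net use
```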

Workaround (a consolidated script follows this list):

  1. Identify the broken SMB connection (it should say "Disconnected"): run `net use`
  2. Remove the broken share: `Remove-SmbGlobalMapping -RemotePath \\path`
  3. Enter the credentials for the SMB share: `$creds = Get-Credential`
  4. Recreate the mapping: `New-SmbGlobalMapping -RemotePath \\fs1.contoso.com\public -Credential $creds`

Pods should then be able to connect to the SMB share.
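
As one script, a minimal sketch of the same workaround (the \\fs1.contoso.com\public path is the example share from above; substitute your own):

```powershell
# Sketch of the manual workaround above as a single sequence.
$remotePath = '\\fs1.contoso.com\public'   # example share; use your own

# A broken share shows as "Disconnected" in the session list.
net use

# Drop the stale kernel-level global mapping without a confirmation prompt.
Remove-SmbGlobalMapping -RemotePath $remotePath -Force

# Prompt for credentials and re-create the mapping.
$creds = Get-Credential
New-SmbGlobalMapping -RemotePath $remotePath -Credential $creds
```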

What you expected to happen:

The volume should stay mapped to the container with no disconnection of the SMBGlobalMapping. Instead, pods suddenly crash because the volume mapping is disconnected and the volume can no longer be reached.

How to reproduce it:
It's an intermittent issue: whenever we reboot the node it works fine again, so we have not been able to reproduce it on demand.

Anything else we need to know?:
We are using csi-provisioner:v3.5.0 and csi-node-driver-registrar:v2.10.0 in our environment.

Environment: dev environment

  • CSI Driver version: v2.10.0
  • Kubernetes version (use kubectl version): v1.26.9+c7606e7
  • OS (e.g. from /etc/os-release):
  • Kernel (e.g. uname -a): Windows Server 2022
  • Install tools:
  • Others:
@rreaplli rreaplli changed the title SMB volume mount lock on windows nodes SMB volume mount map disconnecting on windows nodes. Jan 18, 2024
@rreaplli rreaplli changed the title SMB volume mount map disconnecting on windows nodes. SMB volume map disconnecting on windows nodes. Jan 18, 2024

deedubb commented Jan 19, 2024

Questions (I don't have the solution): which version are you on, v1.14? How many SMB CSI controllers do you have? And what's your backend storage?

We had similar problems on v1.12-v1.13 with an Azure NetApp Files SMB share, when we had 2 controllers set in values.yaml for the Helm deployment.

After changing to 1 controller on v1.14 we've had no drops, but it's only been a day.

@rreaplli (Author)

Thanks for the prompt response @deedubb. The Kubernetes version is v1.26, we have one SMB controller, and NetApp is the backend storage.

We have tried csi-node-driver-registrar v2.10.0, but no luck; the issue reoccurs after some time.

@rreaplli (Author)

We are using v1.12 in our environment; we will try v1.14.

@andyzhangx (Member)

I don't think this is related to the CSI driver version; the SMBGlobalMapping is broken on the Windows node, and that's the root cause.


deedubb commented Jan 20, 2024

@andyzhangx could you be more verbose? Can you suggest a configuration, or how to troubleshoot and mitigate such problems? What I find annoying is that the share might go offline temporarily and connectivity might break, but the share would not re-establish on its own for me; I had to evict the node and provision a new one. I also didn't have a good node readiness signal that the share was actually accessible. If we can detect that the share is unavailable or has gone offline, maybe an example readiness or liveness probe could be provided?

@rreaplli (Author)

Thanks for the reply @andyzhangx and @deedubb. Yes, exactly: the SMBGlobalMapping is broken on the node. After we applied the workaround below, it worked without a node reboot. Do we have a permanent fix for this problem?

  1. Identify the broken SMB connection (it should say "Disconnected"): run `net use`
  2. Remove the broken share: `Remove-SmbGlobalMapping -RemotePath \\path`
  3. Enter the credentials for the SMB share: `$creds = Get-Credential`
  4. Recreate the mapping: `New-SmbGlobalMapping -RemotePath \\fs1.contoso.com\public -Credential $creds`
  5. Pods should then be able to connect to the SMB share.

@andyzhangx (Member)

When the SMB mount is broken on the node, the only Kubernetes-native recovery is to remove that mount path by deleting the pod; the SMB volume mount then happens again. There is no suitable fix from the CSI driver side: once the mount is complete, the driver does not monitor whether the underlying mount is healthy, as that is out of the CSI driver's scope.

I think you need to check why the SMBGlobalMapping mount breaks so frequently on the node; is there any way for SMBGlobalMapping to recover by itself?
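
On the "recover by itself" question, one node-side option (a hedged sketch, not part of the driver; the share path and credential handling are assumptions) is a small script run periodically, e.g. as a scheduled task, that detects a dead mapping and recreates it:

```powershell
# Hypothetical node-side self-healing sketch; the share path and credential
# source are assumptions, not from this issue.
$remotePath = '\\fs1.contoso.com\public'
$creds = Get-Credential   # for a scheduled task, load from a secure store instead

# If the share no longer answers, tear down and recreate the global mapping.
if (-not (Test-Path -Path $remotePath)) {
    Get-SmbGlobalMapping -RemotePath $remotePath -ErrorAction SilentlyContinue |
        Remove-SmbGlobalMapping -Force
    New-SmbGlobalMapping -RemotePath $remotePath -Credential $creds
}
```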


deedubb commented Feb 4, 2024

@andyzhangx I appreciate that it might not be in the CSI driver's scope; but in the "I use this software package to build and maintain my SMB/CIFS share connectivity" mindset, would you have any suggestions for how to monitor the share, repair the share, and/or mark the node liveness as unhealthy?

@andyzhangx (Member)

> @andyzhangx I appreciate that it might not be in the CSI driver's scope; but in the "I use this software package to build and maintain my SMB/CIFS share connectivity" mindset, would you have any suggestions for how to monitor the share, repair the share, and/or mark the node liveness as unhealthy?

@deedubb if the SMB volume is invalid on the node, the pod could have a liveness probe that checks the mount path and crashes if the SMB volume is invalid; then you may have your operator start another new pod on the node and delete the crashing pod.
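
A minimal sketch of such a liveness probe, assuming a Windows pod with the SMB volume mounted at C:\mnt\smb (the image, paths, timings, and claim name are illustrative, not from this issue):

```yaml
# Hypothetical pod fragment: an exec liveness probe that fails when the
# SMB mount path stops answering, so kubelet restarts the container.
apiVersion: v1
kind: Pod
metadata:
  name: smb-consumer
spec:
  nodeSelector:
    kubernetes.io/os: windows
  containers:
    - name: app
      image: mcr.microsoft.com/windows/servercore:ltsc2022   # example image
      command: ["powershell.exe", "-Command", "Start-Sleep -Seconds 360000"]
      volumeMounts:
        - name: smb-vol
          mountPath: C:\mnt\smb
      livenessProbe:
        exec:
          command:
            - powershell.exe
            - -Command
            - if (-not (Test-Path 'C:\mnt\smb')) { exit 1 }
        initialDelaySeconds: 30
        periodSeconds: 60
        failureThreshold: 3
  volumes:
    - name: smb-vol
      persistentVolumeClaim:
        claimName: smb-pvc   # example claim name
```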

@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle stale
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label May 5, 2024