
Stale NFS file handle #419

Open
alexyao2015 opened this issue Feb 10, 2022 · 14 comments
Labels
lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale.

Comments

@alexyao2015 commented Feb 10, 2022

What happened:

When the SMB server is terminated while still mounted to pods, the CSI node driver does not unmount the stale mount and create a new one. This prevents new pods from starting even when the server is back online; instead, the error below results. It can be resolved manually by restarting all the csi-smb-node pods. (A sketch of the missing detect-and-unmount step follows the error.)

Error: failed to generate container "88d821046af27afe3710f0ffc413529b7eb46f2844156cfb154099dc94d75984"
spec: failed to generate spec: failed to stat "/var/lib/kubelet/pods/7244575c-5464-4faa-98f1-a80a2366287f/volumes/kubernetes.io~csi/smb-pv/mount": 
stat /var/lib/kubelet/pods/7244575c-5464-4faa-98f1-a80a2366287f/volumes/kubernetes.io~csi/smb-pv/mount: stale NFS file handle
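For illustration, here is a minimal Go sketch of the missing step, assuming the k8s.io/mount-utils helpers that Kubernetes CSI drivers commonly build on; the function name and path are made up, and this is not the driver's actual code:

```go
// Hypothetical helper: before reusing a volume path, detect a stale/corrupted
// mount and unmount it so the next mount attempt can succeed.
package main

import (
	"fmt"
	"os"

	mount "k8s.io/mount-utils"
)

// ensureHealthyMount is a made-up name; in a real driver the target would
// come from NodeStageVolume/NodePublishVolume.
func ensureHealthyMount(target string) error {
	if _, err := os.Stat(target); err != nil {
		// IsCorruptedMnt recognizes ESTALE ("stale NFS file handle"),
		// ENOTCONN, and similar errnos that dead mounts return.
		if !mount.IsCorruptedMnt(err) {
			return err // some other failure; surface it
		}
		// Unmount the dead mount point; a lazy/forced unmount may be
		// required if a plain unmount hangs on an unreachable server.
		if err := mount.New("").Unmount(target); err != nil {
			return fmt.Errorf("unmount corrupted mount %s: %w", target, err)
		}
	}
	return nil // healthy or cleaned up; safe to (re)mount
}

func main() {
	// Placeholder path; use the pod volume path from the error message.
	if err := ensureHealthyMount("/var/lib/kubelet/pods/<pod-uid>/volumes/kubernetes.io~csi/smb-pv/mount"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
```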

What you expected to happen:

The driver should recover the stale mount on its own, without a manual restart of the csi-smb-node pod.

How to reproduce it:

  1. Start a pod with an SMB mount.
  2. Kill the SMB server.
  3. Restart the pod (it shouldn't attach because the server is down).
  4. Start the SMB server.
  5. Observe the pod is never able to start even when the server is back up (the snippet below can confirm the failure mode).
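At step 5, a quick check run on the node (a sketch, not part of the driver; the path is a placeholder) can confirm that the failure is ESTALE, the errno behind "stale NFS file handle":

```go
// Stat the pod's volume path and report whether the failure is ESTALE.
package main

import (
	"errors"
	"fmt"
	"os"
	"syscall"
)

func main() {
	// Placeholder; substitute the real pod volume path.
	path := "/var/lib/kubelet/pods/<pod-uid>/volumes/kubernetes.io~csi/smb-pv/mount"
	_, err := os.Stat(path)
	switch {
	case errors.Is(err, syscall.ESTALE):
		fmt.Println("stale file handle confirmed:", err)
	case err != nil:
		fmt.Println("different failure:", err)
	default:
		fmt.Println("mount looks healthy")
	}
}
```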

Anything else we need to know?:

This may be related to #164, but it is showing a different error message.

Environment:

  • CSI Driver version: v1.5.0
  • Kubernetes version (use kubectl version): v1.20
  • OS (e.g. from /etc/os-release):
  • Kernel (e.g. uname -a):
  • Install tools:
  • Others:
@andyzhangx (Member)

The error shows "stale NFS file handle", yet this is an SMB server.

@alexyao2015 (Author) commented Feb 11, 2022

Yes, that's correct. It's perplexing why it shows a stale NFS error when this is SMB. I am definitely using the SMB CSI driver, not NFS.
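A likely explanation, for what it's worth: "stale NFS file handle" is simply Linux's historical wording for the errno ESTALE, which the CIFS/SMB kernel client can also return when the server goes away, and Go (which kubelet and this driver are written in) renders the errno with that NFS-flavored string. A one-liner shows it:

```go
// ESTALE's Linux error string mentions NFS for historical reasons,
// no matter which filesystem actually returned the errno.
package main

import (
	"fmt"
	"syscall"
)

func main() {
	fmt.Println(syscall.ESTALE.Error()) // prints: stale NFS file handle
}
```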

@jrbe228 commented Apr 25, 2022

I have the same issue using the SMB CSI driver. For some reason the "stale NFS file handle" error only affects Linux pods trying to mount the SMB share. Windows pods show no error message and continue to mount the share successfully after an SMB server restart.

Another issue: restarting the csi-smb-node pods didn't seem to fix the problem...

@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle stale
  • Mark this issue or PR as rotten with /lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot added the lifecycle/stale label Jul 24, 2022
@k8s-triage-robot

/lifecycle rotten

@k8s-ci-robot added the lifecycle/rotten label and removed the lifecycle/stale label Aug 23, 2022
@alexyao2015 (Author)

/remove-lifecycle rotten

@k8s-ci-robot removed the lifecycle/rotten label Aug 24, 2022
@k8s-triage-robot

/lifecycle stale

@k8s-ci-robot added the lifecycle/stale label Nov 22, 2022
@andyzhangx removed the lifecycle/stale label Dec 5, 2022
@k8s-triage-robot

/lifecycle stale

@k8s-ci-robot added the lifecycle/stale label Mar 5, 2023
@alexyao2015 (Author)

/remove-lifecycle stale

@k8s-ci-robot removed the lifecycle/stale label Mar 5, 2023
@rmoreas commented Apr 18, 2023

Restarting csi-smb-node pods didn't recover the stale mount for us either.

Is there another way to recover from this issue without rebooting the node?
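One node-side workaround that avoids a reboot, sketched here with made-up paths and with the caveat that a lazy detach leaves already-running pods holding dead file descriptors: lazily detach the stale mount points so kubelet and the driver can mount fresh.

```go
// Hypothetical cleanup, run on the affected node: lazily detach stale SMB
// mount points (the equivalent of `umount -l`). Paths are placeholders.
package main

import (
	"fmt"
	"os"

	"golang.org/x/sys/unix"
)

func main() {
	stale := []string{
		"/var/lib/kubelet/plugins/kubernetes.io/csi/smb.csi.k8s.io/<id>/globalmount",
		"/var/lib/kubelet/pods/<pod-uid>/volumes/kubernetes.io~csi/smb-pv/mount",
	}
	for _, p := range stale {
		// MNT_DETACH detaches immediately, even while the mount is busy.
		if err := unix.Unmount(p, unix.MNT_DETACH); err != nil {
			fmt.Fprintf(os.Stderr, "detach %s: %v\n", p, err)
		}
	}
}
```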

@ThomVivet

From time to time we get the same "stale NFS file handle" error when restarting a pod that uses SMB mounts.
I found a workaround: scaling the application down and back up. I know it's not good for production environments.
I also noticed that the same share is mounted twice:

  • once into the pod: /var/lib/kubelet/pods/xxx/volumes/kubernetes.io~csi/xxx/mount
  • and another into the driver: /var/lib/kubelet/plugins/kubernetes.io/csi/smb.csi.k8s.io/xxx/globalmount
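The two mounts are expected CSI layout, if I understand the spec correctly: NodeStageVolume mounts the share once under globalmount, and NodePublishVolume bind-mounts that into each consuming pod, so a stale handle shows up in both places. A small sketch, assuming k8s.io/mount-utils, that lists every entry for a given share:

```go
// List every mount point whose device refers to the SMB share, to see the
// globalmount and the per-pod bind mounts side by side.
package main

import (
	"fmt"
	"strings"

	mount "k8s.io/mount-utils"
)

func main() {
	share := "smb-server/share" // placeholder: host/share of the SMB source
	mps, err := mount.New("").List()
	if err != nil {
		panic(err)
	}
	for _, mp := range mps {
		if strings.Contains(mp.Device, share) {
			fmt.Printf("%s -> %s (%s)\n", mp.Device, mp.Path, mp.Type)
		}
	}
}
```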

@k8s-triage-robot

/lifecycle stale

@k8s-ci-robot added the lifecycle/stale label Jan 21, 2024
@DarkFM commented Feb 13, 2024

/remove-lifecycle stale

@k8s-ci-robot removed the lifecycle/stale label Feb 13, 2024
@k8s-triage-robot

/lifecycle stale

@k8s-ci-robot added the lifecycle/stale label May 13, 2024