
MountVolume.NodeExpandVolume failed error for volume declared as read-only file system #723

Closed
Zombro opened this issue Jan 15, 2024 · 4 comments


Zombro commented Jan 15, 2024

What happened:

Mounting an SMB filesystem declared as read-only in `.spec.template.spec.volumes[*]` triggers an error event in the kubelet logs. Scheduling, deployment, and the filesystem itself all appear to work, but this event fires:

MountVolume.NodeExpandVolume failed for volume "smb-config" requested read-only file system

This error/event does not fire if `.spec.template.spec.volumes[*].readOnly` is omitted.

What you expected to happen:

No errors reported.

How to reproduce it:

Deploy a simple test workload like the one below. As presented, it works without errors and the mounted filesystem is read-only as expected.

Note that `.spec.template.spec.volumes[0].persistentVolumeClaim.readOnly` is commented out. When it is enabled, the mentioned error/event fires, but the workload still functions.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: smb-ro-mount
spec:
  replicas: 1
  selector:
    matchLabels:
      app: smb-ro-mount
  template:
    metadata:
      labels:
        app: smb-ro-mount
    spec:
      volumes:
        - name: smb-config
          persistentVolumeClaim:
            claimName: smb-config
            # readOnly: true
      containers:
        - name: smb-ro-mount-example
          image: nginx
          volumeMounts:
            - name: smb-config
              readOnly: true
              mountPath: /config
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
              - matchExpressions:
                  - key: kubernetes.io/os
                    operator: In
                    values:
                      - linux
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: smb-config
spec:
  accessModes:
    - ReadOnlyMany
  resources:
    requests:
      storage: 10Mi
  volumeName: smb-config
  volumeMode: Filesystem
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: smb-config
spec:
  capacity:
    storage: 10Mi
  csi:
    driver: smb.csi.k8s.io
    volumeHandle: smb-config-a1b2c3
    fsType: ext4
    volumeAttributes:
      createSubDir: "true"
      source: \\smbtest.x.net\K8S\config-demo
    nodeStageSecretRef:
      name: smb-demo-creds
      namespace: default
  accessModes:
    - ReadOnlyMany
  persistentVolumeReclaimPolicy: Retain
  mountOptions:
    - dir_mode=0555
    - file_mode=0444
    - vers=3.0
  volumeMode: Filesystem

Environment:

  • CSI Driver version: Helm chart v1.13.0, image: registry.k8s.io/sig-storage/smbplugin:v1.13.0
  • Kubernetes version: 1.28
  • OS: Windows Server 2022 & Ubuntu 22.04.3
  • Kernel(s): 10.0.20348.2159 & 5.15.0-75-generic
  • Install tools: helm

Parting Thoughts

Maybe this isn't an issue with csi-driver-smb directly, but rather a coupling between kubelet and CSI volume operations. It would be nice if the documentation pointed out this behavior somewhere.

@andyzhangx (Member) commented:

Why is MountVolume.NodeExpandVolume triggered? Have you expanded a PVC or PV?


Zombro commented Jan 16, 2024

No, I have not expanded anything.


JYlag commented Mar 1, 2024

I am also seeing this in some of my Filestore logs. The Filestore works fine most of the time, but sporadically I have had mounting issues that leave pods stuck in an init stage. I wonder if these are connected (it doesn't seem so), but I'm curious why these logs pop up.


Zombro commented May 24, 2024

I think I figured out the ultimate solution here:

  • create a wrapper Helm chart that declares at least one StorageClass whose provisioner references the driver smb.csi.k8s.io, and includes this csi-driver-smb chart as a subchart
  • ensure the StorageClass declares allowVolumeExpansion: false
  • ensure any of your dependent csi.smb PVs & PVCs reference that StorageClass
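The StorageClass described above could look something like this (a minimal sketch; the name and reclaim/binding settings are illustrative, not from the thread):

```yaml
# Illustrative StorageClass for csi-driver-smb with expansion disabled.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: smb-noexpand          # hypothetical name
provisioner: smb.csi.k8s.io   # must match the installed driver name
allowVolumeExpansion: false   # prevents kubelet from attempting NodeExpandVolume
reclaimPolicy: Retain
volumeBindingMode: Immediate
```

Any PV and PVC that should use it would then set `storageClassName: smb-noexpand` in their spec. With `allowVolumeExpansion: false`, kubelet should no longer attempt NodeExpandVolume against the read-only mount.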

I dug through a lot of storage API and CSI code to reach this conclusion... then noticed the documentation: https://kubernetes.io/blog/2022/05/05/volume-expansion-ga/#storage-driver-support

Maintainers, could you add a StorageClass template and values interface to the Helm chart to make our lives easier?

@Zombro Zombro closed this as completed May 24, 2024