In response to a gRPC /csi.v1.Node/NodeStageVolume request from the CSI plugin, the volume mount fails with the following error:
failed to mount volume 0001-0024-1d88c854-9fa3-4806-b80f-5bbd29e03756-0000000000000002-5b9d7afa-c0c9-46a0-8ac4-2f60aea1dd9f: an error (exit status 1) occurred while running modprobe args: [ceph] Check dmesg logs if required.
GRPC error: rpc error: code = Internal desc = an error (exit status 1) occurred while running modprobe args: [ceph]
When I checked dmesg, I found this log: Invalid ELF header magic: != \x7fELF
I hope this isn't a bug, but it seems to be out of my control.
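For what it's worth, "Invalid ELF header magic" usually means the kernel was handed a module file that is not a valid ELF binary (e.g. corrupted, truncated, or not a real .ko file). A rough sketch of how one might check the module state directly on the Proxmox host; paths and output may differ on your system:

```sh
# Is the ceph module already loaded, or is cephfs support built into the kernel?
lsmod | grep -w ceph
grep -w ceph /proc/filesystems

# Try loading it manually and watch the kernel log for the ELF error
modprobe ceph || dmesg | tail -n 20

# Inspect the module file modprobe would load; it should be an ELF object
# (modinfo reports "(builtin)" on some systems when the module is compiled in)
modinfo -n ceph
file "$(modinfo -n ceph)"
```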
Environment details
Image/version of Ceph CSI driver: csi-node-driver-registrar:v2.9.3, cephcsi:canary (Docker images)
Helm chart version: N/A
Kernel version: 6.2.16-3-pve (Proxmox)
Mounter used for mounting PVC (for CephFS it's fuse or kernel; for RBD it's krbd or rbd-nbd): kernel
Kubernetes cluster version: v1.29.4+k3s1
Ceph cluster version: 17.2.7
Steps to reproduce
I deployed the CSI plugin and CephFS driver using this manual as a reference.
I created and deployed a StorageClass that uses the cephfs.csi.ceph.com provisioner.
I created a PVC that uses that StorageClass.
The provisioner works fine; all PVCs are bound (a quick way to verify is sketched below).
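A few kubectl commands to confirm the binding; a sketch, with csi-cephfs-sc and my-cephfs-pvc standing in for the actual object names:

```sh
# Check the provisioner on the StorageClass (names are placeholders)
kubectl get storageclass csi-cephfs-sc -o jsonpath='{.provisioner}'   # expect cephfs.csi.ceph.com

# All PVCs using that class should report Bound
kubectl get pvc -A
kubectl describe pvc my-cephfs-pvc   # events show the provisioning history
```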
Actual results
Pods attempting to mount the volume received the following error message from the kubelet:
Warning FailedMount 6m9s (x18 over 26m) kubelet MountVolume.MountDevice failed for volume "pvc-2c77ab9f-d45b-4dde-a548-b9db686aaf7a" : rpc error: code = Internal desc = an error (exit status 1) occurred while running modprobe args: [ceph]
I found the following message from the cephfs plugin pod.
Expected behavior
Interestingly, it mounts successfully on other K8s clusters using the same Ceph cluster. I was able to check the logs from that CephFS plugin.
The StorageClass and ConfigMap (config.json) of the k8s cluster where the error occurs and of the k8s cluster that is working correctly match completely.
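One way to verify that beyond eyeballing is to diff the rendered objects from both clusters; a sketch, where the contexts, namespace, and object names are placeholders for the real ones:

```sh
# Dump the CSI ConfigMap from both clusters and compare
# (metadata fields such as uid/resourceVersion will always differ)
kubectl --context working-cluster -n ceph-csi-cephfs get cm ceph-csi-config -o yaml > working.yaml
kubectl --context failing-cluster -n ceph-csi-cephfs get cm ceph-csi-config -o yaml > failing.yaml
diff working.yaml failing.yaml

# Same idea for the StorageClass
diff <(kubectl --context working-cluster get sc csi-cephfs-sc -o yaml) \
     <(kubectl --context failing-cluster get sc csi-cephfs-sc -o yaml)
```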
@Madhu-1 I don't think so. The volume mounts fine on another k8s cluster running the same version of the kubelet.
I haven't tried loading the ceph module manually; I just use the built-in ceph module on Proxmox. I'll try changing the version of the driver/CSI images...
@gotoweb, the Ceph-CSI driver loads the kernel module that is provided by a host-path volume. If the module is already loaded (or built in), it should not try to load it again.
Commit ab87045 checks for support of the cephfs filesystem; it is included in Ceph-CSI 3.11 and was backported to 3.10 with #4381.
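I haven't dug into the commit itself, but a check like that presumably boils down to consulting /proc/filesystems before shelling out to modprobe; roughly:

```sh
# Skip modprobe when cephfs support is already registered (module loaded or built in)
if grep -qw ceph /proc/filesystems; then
    echo "cephfs already supported; no modprobe needed"
else
    modprobe ceph || { echo "modprobe ceph failed; check dmesg"; exit 1; }
fi
```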