
MountVolume.SetUp fails for pv created on Dell ME5 #104

Open
matmitch opened this issue Mar 19, 2024 · 2 comments
@matmitch

Hello,

I installed the Seagate exos-x-csi driver on my RKE2 cluster (deployed on Ubuntu 22.04 in a Proxmox cluster).
Provisioning a volume on the Dell ME5 disk bay succeeds, and the PV and PVC are bound. But when trying to mount the PVC in a pod I get the following error:

Events:
  Type     Reason       Age                   From     Message
  ----     ------       ----                  ----     -------
  Warning  FailedMount  16m (x243 over 16h)   kubelet  MountVolume.SetUp failed for volume "pvc-8396f42b-f5ff-45a9-8ec0-606b8a80cc46" : rpc error: code = DeadlineExceeded desc = context deadline exceeded
  Warning  FailedMount  2m9s (x434 over 16h)  kubelet  Unable to attach or mount volumes: unmounted volumes=[volume], unattached volumes=[], failed to process volumes=[]: timed out waiting for the condition

My PV config:

apiVersion: v1
kind: PersistentVolume
spec:
  accessModes:
  - ReadWriteOnce
  capacity:
    storage: 1Gi
  claimRef:
    apiVersion: v1
    kind: PersistentVolumeClaim
    name: claim
    namespace: product
    resourceVersion: "24041421"
    uid: 8396f42b-f5ff-45a9-8ec0-606b8a80cc46
  csi:
    controllerExpandSecretRef:
      name: secret-storage-hubsaas
      namespace: hub-saas-storage
    controllerPublishSecretRef:
      name: secret-storage-hubsaas
      namespace: hub-saas-storage
    driver: csi-exos-x.seagate.com
    fsType: ext4
    volumeAttributes:
      fsType: ext4
      iqn: iqn.1988-11.com.dell:01.array.bc305b5dd35b
      pool: A
      portals: 10.14.11.201,10.14.11.202,10.14.11.203,10.14.12.201,10.14.12.202,10.14.12.203
      storage.kubernetes.io/csiProvisionerIdentity: 1710519786265-8081-csi-exos-x.seagate.com
      storageProtocol: iscsi
      volPrefix: QUA
    volumeHandle: QUA_42bf5ff45a98ec0606b8a80cc46##iscsi##600c0ff0006e1181977ff86501000000
  persistentVolumeReclaimPolicy: Retain
  storageClassName: hub-saas-storage
  volumeMode: Filesystem
status:
  phase: Bound

My PVC config:

apiVersion: v1
kind: PersistentVolumeClaim
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  storageClassName: hub-saas-storage
  volumeMode: Filesystem
  volumeName: pvc-8396f42b-f5ff-45a9-8ec0-606b8a80cc46
status:
  accessModes:
  - ReadWriteOnce
  capacity:
    storage: 1Gi
  phase: Bound

Logs from the CSI controller:

I0318 15:58:08.724983       1 controller.go:250] "using API" address="http://10.14.1.231"
I0318 15:58:10.742368       1 mc.go:103] "++ MC Login SUCCESS" ipaddress="10.14.1.231" protocol="http"
I0318 15:58:10.742388       1 controller.go:267] login was successful
I0318 15:58:19.678838       1 system.go:141] 
I0318 15:58:19.678851       1 system.go:142] System Information:
I0318 15:58:19.678856       1 system.go:144] 
I0318 15:58:19.678859       1 system.go:145] === Controller ===
I0318 15:58:19.678863       1 system.go:146] IPAddress:     10.14.1.231
I0318 15:58:19.678868       1 system.go:147] Protocol:      http://
I0318 15:58:19.678872       1 system.go:148] Controller:    A
I0318 15:58:19.678878       1 system.go:149] Platform:      Indium LX2
I0318 15:58:19.678882       1 system.go:150] SerialNumber:  CN0K3F8WSXXXXXXXX
I0318 15:58:19.678887       1 system.go:151] Status:        Opérationnel
I0318 15:58:19.678892       1 system.go:152] MCCodeVersion: IXM200R009-01
I0318 15:58:19.678897       1 system.go:153] MCBaseVersion: IXM200R009-01
I0318 15:58:19.678902       1 system.go:155] 
I0318 15:58:19.678908       1 system.go:156] === Ports ===
I0318 15:58:19.678914       1 system.go:158] Port [0] A0, iSCSI, iqn.1988-11.com.dell:01.array.bc305b5dd35b,    10.14.11.201,       Absent, N/A
I0318 15:58:19.678926       1 system.go:158] Port [1] A1, iSCSI, iqn.1988-11.com.dell:01.array.bc305b5dd35b,    10.14.11.202,       Absent, N/A
I0318 15:58:19.678933       1 system.go:158] Port [2] A2, iSCSI, iqn.1988-11.com.dell:01.array.bc305b5dd35b,    10.14.11.203,       Absent, N/A
I0318 15:58:19.678939       1 system.go:158] Port [3] A3, iSCSI, iqn.1988-11.com.dell:01.array.bc305b5dd35b,         0.0.0.0,       Absent, N/A
I0318 15:58:19.678945       1 system.go:158] Port [4] B0, iSCSI, iqn.1988-11.com.dell:01.array.bc305b5dd35b,    10.14.12.201,       Absent, N/A
I0318 15:58:19.678951       1 system.go:158] Port [5] B1, iSCSI, iqn.1988-11.com.dell:01.array.bc305b5dd35b,    10.14.12.202,       Absent, N/A
I0318 15:58:19.678958       1 system.go:158] Port [6] B2, iSCSI, iqn.1988-11.com.dell:01.array.bc305b5dd35b,    10.14.12.203,       Absent, N/A
I0318 15:58:19.678964       1 system.go:158] Port [7] B3, iSCSI, iqn.1988-11.com.dell:01.array.bc305b5dd35b,         0.0.0.0,       Absent, N/A
I0318 15:58:19.678980       1 system.go:162] 
I0318 15:58:19.678986       1 system.go:163] === Pools ===
I0318 15:58:19.678992       1 system.go:165] Pool [0] A             Virtuel   00c0ff6e118100002a06fa6401000000
I0318 15:58:19.679006       1 system.go:165] Pool [1] A             Virtuel   00c0ff6e118100002a06fa6401000000
I0318 15:58:19.679012       1 system.go:168] 
I0318 15:58:19.679019       1 system.go:73] TranslateName(pvc): uuid="8396f42b-f5ff-45a9-8ec0-606b8a80cc46"
I0318 15:58:19.679027       1 system.go:96] TranslateName "pvc-8396f42b-f5ff-45a9-8ec0-606b8a80cc46"[40], prefix "QUA_"[4], result "QUA_42bf5ff45a98ec0606b8a80cc46"[31]
I0318 15:58:19.679038       1 provisioner.go:85] creating volume "QUA_42bf5ff45a98ec0606b8a80cc46" (size 1073741824B) pool "A" [Virtuel] using protocol (iscsi)
I0318 15:58:20.069793       1 provisioner.go:164] created volume QUA_42bf5ff45a98ec0606b8a80cc46##iscsi##600c0ff0006e1181977ff86501000000 (1073741824B)
I0318 15:58:20.069813       1 driver.go:136] === [ROUTINE END] [0] /csi.v1.Controller/CreateVolume (39c08ac72eb8) <11.344845712s> ===
I0318 15:58:26.208715       1 driver.go:125] === [ROUTINE REQUEST] [0] /csi.v1.Controller/ControllerPublishVolume (50ddba8f5dac) <0s> ===
I0318 15:58:26.208737       1 driver.go:132] === [ROUTINE START] [1] /csi.v1.Controller/ControllerPublishVolume (50ddba8f5dac) <891ns> ===
I0318 15:58:26.208788       1 controller.go:250] "using API" address="http://10.14.1.231"
I0318 15:58:26.220789       1 publisher.go:39] "attach request" initiator(s)=["iqn.2004-10.com.ubuntu:01:c441ef15a3e"] volume="QUA_42bf5ff45a98ec0606b8a80cc46"
I0318 15:58:26.267254       1 volumes.go:427] "Get Volume Maps Host Names" hostnames=[] apistatus={"ResponseType":"Success","ResponseTypeNumeric":0,"Response":"Commande exécutée avec succès. (2024-03-18 17:53:33)","ReturnCode":0,"Time":"2024-03-18T17:53:33Z"}
I0318 15:58:26.267286       1 volumes.go:271] "listing all LUN mappings"
I0318 15:58:26.267295       1 volumes.go:236] "++ ShowHostMaps" host="iqn.2004-10.com.ubuntu:01:c441ef15a3e"
I0318 15:58:26.282590       1 volumes.go:448] "using LUN" lun=1
I0318 15:58:26.282609       1 volumes.go:341] "trying to map volume" volume="QUA_42bf5ff45a98ec0606b8a80cc46" initiator="iqn.2004-10.com.ubuntu:01:c441ef15a3e" lun=1
I0318 15:58:29.050190       1 volumes.go:347] "status" ReturnCode=0
I0318 15:58:29.050215       1 volumes.go:456] "successfully mapped" volume="QUA_42bf5ff45a98ec0606b8a80cc46" initiator=["iqn.2004-10.com.ubuntu:01:c441ef15a3e"] lun=1
I0318 15:58:29.050235       1 driver.go:136] === [ROUTINE END] [0] /csi.v1.Controller/ControllerPublishVolume (50ddba8f5dac) <2.841498421s> ===
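
Judging from the `TranslateName` lines above, the driver appears to build the array volume name by dropping the `pvc-` prefix and the dashes from the UUID, then keeping only the trailing characters that still fit a 31-character name together with the `volPrefix`. A rough shell re-implementation of that transformation (my own inference from the log output, not the driver's actual code):

```shell
#!/bin/sh
# translate_name: hypothetical reconstruction of the driver's TranslateName step.
# $1 = Kubernetes PV name (e.g. "pvc-<uuid>"), $2 = configured volPrefix.
translate_name() {
  pvc="$1"; prefix="$2"; max=31   # assumed 31-char limit, inferred from the log
  # Strip the "pvc-" prefix and all dashes from the UUID.
  hexid=$(printf '%s' "$pvc" | sed 's/^pvc-//; s/-//g')
  # Keep only as many trailing characters as fit after the prefix.
  keep=$((max - ${#prefix}))
  if [ ${#hexid} -gt "$keep" ]; then
    hexid=$(printf '%s' "$hexid" | tail -c "$keep")
  fi
  printf '%s%s\n' "$prefix" "$hexid"
}

translate_name "pvc-8396f42b-f5ff-45a9-8ec0-606b8a80cc46" "QUA_"
```

Running it on the PV from this issue reproduces the `QUA_42bf5ff45a98ec0606b8a80cc46` name seen in the log.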

I used the Helm chart default values; my 3 Proxmox nodes are connected directly to my 2 controllers. I checked connectivity from the VMs to the controllers and everything seems to be working.

Any help investigating this issue would be much appreciated.

@seagate-chris
Collaborator

I would check the node logs (the CSI Node logs and the /var/log/messages or journalctl logs on the Ubuntu node). Make sure multipath.conf is set up and that the iSCSI initiator is working on that node.
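For reference, a minimal set of checks along those lines, run directly on the worker node (these assume root access and the standard `multipath-tools` and `open-iscsi` packages on Ubuntu):

```shell
# Are the multipath and iSCSI daemons running?
systemctl status multipathd iscsid

# Are there logged-in iSCSI sessions to the array's portals?
iscsiadm -m session

# Do the multipath maps exist, and are all paths active?
multipath -ll

# Any mount/attach errors from the kubelet side?
journalctl -u kubelet --since "1 hour ago" | grep -i mount
```

If `multipath -ll` shows no maps or `iscsiadm -m session` shows no sessions while the controller logs report a successful mapping, the problem is on the node side rather than the array side.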

@seagate-chris seagate-chris self-assigned this Mar 19, 2024
@matmitch
Author

It was indeed a multipath configuration problem, thank you for your insight.
Now the PVCs get mounted properly, but mounting is very slow: it takes around 10 minutes to mount a dozen PVCs. Is this expected behavior, or are there parameters I can set to get faster provisioning?

cat /etc/multipath.conf
defaults {
        polling_interval           2
        user_friendly_names        "no"
        find_multipaths            "greedy"
        retain_attached_hw_handler "no"
        disable_changed_wwids      "yes"
        path_grouping_policy       "group_by_prio"
        path_checker               "tur"
        features                   "0"
        hardware_handler           "0"
        prio                       "alua"
        failback                   immediate
        rr_weight                  "uniform"
        no_path_retry              2
}
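
(For anyone reproducing this fix: `multipathd` does not pick up edits to `/etc/multipath.conf` on its own. A typical sequence on a systemd-based node such as Ubuntu 22.04:)

```shell
# Ask the running daemon to re-read /etc/multipath.conf ...
sudo multipathd reconfigure

# ... or restart the service outright if the daemon is in a bad state.
sudo systemctl restart multipathd
```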

I pasted below the logs from one of my worker nodes during provisioning:

Mar 19 17:11:20 kube-qualif-worker-02 kernel: EXT4-fs (dm-6): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none.
Mar 19 17:11:20 kube-qualif-worker-02 systemd[1]: var-lib-kubelet-pods-24ab6da5\x2d25d0\x2d41a4\x2db343\x2da2c88ee8b7d5-volumes-kubernetes.io\x7ecsi-pvc\x2d0e28e119\x2d20a2\x2d4954\x2d9479\x2d2be70589d4be-mount.mount: Deactivated successfully.
Mar 19 17:11:20 kube-qualif-worker-02 multipathd[57867]: 3600c0ff0006e11816fa4f96501000000: removing map by alias
Mar 19 17:11:20 kube-qualif-worker-02 multipath[245492]: dm-6 is not a multipath map
Mar 19 17:11:20 kube-qualif-worker-02 multipathd[57867]: libdevmapper: ioctl/libdm-iface.c(1927): device-mapper: table ioctl on 3600c0ff0006e11816fa4f96501000000  failed: No such device or address
Mar 19 17:11:20 kube-qualif-worker-02 kernel: sd 3:0:0:15: [sdn] Synchronizing SCSI cache
Mar 19 17:11:20 kube-qualif-worker-02 kernel: scsi 3:0:0:15: alua: Detached
Mar 19 17:11:20 kube-qualif-worker-02 kernel: sd 4:0:0:15: [sdo] Synchronizing SCSI cache
Mar 19 17:11:20 kube-qualif-worker-02 kernel: scsi 4:0:0:15: alua: Detached
Mar 19 17:11:20 kube-qualif-worker-02 systemd[1]: Removed slice libcontainer container kubepods-besteffort-pod24ab6da5_25d0_41a4_b343_a2c88ee8b7d5.slice.
Mar 19 17:11:20 kube-qualif-worker-02 systemd[1]: Created slice libcontainer container kubepods-besteffort-podc428affc_4ae2_4e8d_8582_ecd33578c808.slice.
Mar 19 17:11:21 kube-qualif-worker-02 multipathd[57867]: dm-6: devmap not registered, can't remove
Mar 19 17:12:33 kube-qualif-worker-02 kernel: scsi 3:0:0:13: Direct-Access     DellEMC  ME5              I200 PQ: 0 ANSI: 6
Mar 19 17:12:33 kube-qualif-worker-02 kernel: scsi 3:0:0:13: alua: supports implicit TPGS
Mar 19 17:12:33 kube-qualif-worker-02 kernel: scsi 3:0:0:13: alua: device naa.600c0ff0006e11817fa4f96501000000 port group 1 rel port 6
Mar 19 17:12:33 kube-qualif-worker-02 kernel: sd 3:0:0:13: Attached scsi generic sg16 type 0
Mar 19 17:12:33 kube-qualif-worker-02 kernel: sd 3:0:0:13: Warning! Received an indication that the LUN assignments on this target have changed. The Linux SCSI layer does not automatical
Mar 19 17:12:33 kube-qualif-worker-02 kernel: scsi 4:0:0:13: Direct-Access     DellEMC  ME5              I200 PQ: 0 ANSI: 6
Mar 19 17:12:33 kube-qualif-worker-02 kernel: scsi 4:0:0:13: alua: supports implicit TPGS
Mar 19 17:12:33 kube-qualif-worker-02 kernel: scsi 4:0:0:13: alua: device naa.600c0ff0006e11817fa4f96501000000 port group 0 rel port 2
Mar 19 17:12:33 kube-qualif-worker-02 kernel: sd 4:0:0:13: Attached scsi generic sg17 type 0
Mar 19 17:12:33 kube-qualif-worker-02 kernel: sd 4:0:0:13: Warning! Received an indication that the LUN assignments on this target have changed. The Linux SCSI layer does not automatical
Mar 19 17:12:33 kube-qualif-worker-02 kernel: sd 4:0:0:13: alua: transition timeout set to 60 seconds
Mar 19 17:12:33 kube-qualif-worker-02 kernel: sd 4:0:0:13: alua: port group 00 state A preferred supports tolusNA
Mar 19 17:12:33 kube-qualif-worker-02 kernel: sd 3:0:0:13: alua: transition timeout set to 60 seconds
Mar 19 17:12:33 kube-qualif-worker-02 kernel: sd 3:0:0:13: alua: port group 01 state N non-preferred supports tolusNA
Mar 19 17:12:33 kube-qualif-worker-02 kernel: sd 4:0:0:13: [sdo] 10485760 512-byte logical blocks: (5.37 GB/5.00 GiB)
Mar 19 17:12:33 kube-qualif-worker-02 kernel: sd 4:0:0:13: [sdo] 4096-byte physical blocks
Mar 19 17:12:33 kube-qualif-worker-02 kernel: sd 4:0:0:13: [sdo] Write Protect is off
Mar 19 17:12:33 kube-qualif-worker-02 kernel: sd 4:0:0:13: [sdo] Mode Sense: fb 00 00 08
Mar 19 17:12:33 kube-qualif-worker-02 kernel: sd 3:0:0:13: [sdn] 10485760 512-byte logical blocks: (5.37 GB/5.00 GiB)
Mar 19 17:12:33 kube-qualif-worker-02 kernel: sd 3:0:0:13: [sdn] 4096-byte physical blocks
Mar 19 17:12:33 kube-qualif-worker-02 kernel: sd 4:0:0:13: [sdo] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
Mar 19 17:12:33 kube-qualif-worker-02 kernel: sd 3:0:0:13: [sdn] Write Protect is off
Mar 19 17:12:33 kube-qualif-worker-02 kernel: sd 3:0:0:13: [sdn] Mode Sense: fb 00 00 08
Mar 19 17:12:33 kube-qualif-worker-02 kernel: sd 4:0:0:13: [sdo] Optimal transfer size 1048576 bytes
Mar 19 17:12:33 kube-qualif-worker-02 kernel: sd 3:0:0:13: [sdn] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
Mar 19 17:12:33 kube-qualif-worker-02 kernel: sd 3:0:0:13: [sdn] Optimal transfer size 1048576 bytes
Mar 19 17:12:33 kube-qualif-worker-02 kernel: sd 4:0:0:13: [sdo] Attached SCSI disk
Mar 19 17:12:33 kube-qualif-worker-02 kernel: sd 3:0:0:13: [sdn] Attached SCSI disk
Mar 19 17:12:34 kube-qualif-worker-02 systemd-udevd[246195]: sdo: Process '/usr/bin/unshare -m /usr/bin/snap auto-import --mount=/dev/sdo' failed with exit code 1.
Mar 19 17:12:34 kube-qualif-worker-02 systemd-udevd[246192]: sdn: Process '/usr/bin/unshare -m /usr/bin/snap auto-import --mount=/dev/sdn' failed with exit code 1.
Mar 19 17:12:34 kube-qualif-worker-02 multipathd[57867]: 3600c0ff0006e11817fa4f96501000000: addmap [0 10485760 multipath 1 queue_if_no_path 1 alua 1 1 service-time 0 1 1 8:208 1]
Mar 19 17:12:34 kube-qualif-worker-02 systemd-udevd[246192]: dm-6: Process '/usr/bin/unshare -m /usr/bin/snap auto-import --mount=/dev/dm-6' failed with exit code 1.
Mar 19 17:12:34 kube-qualif-worker-02 systemd[1]: Created slice libcontainer container kubepods-besteffort-podc4f1693c_571f_43a3_bd0b_02fde03a8da6.slice.
Mar 19 17:12:34 kube-qualif-worker-02 multipathd[57867]: sdn [8:208]: path added to devmap 3600c0ff0006e11817fa4f96501000000
Mar 19 17:12:34 kube-qualif-worker-02 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
Mar 19 17:12:34 kube-qualif-worker-02 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): calif8ddf9a33da: link becomes ready
Mar 19 17:12:34 kube-qualif-worker-02 systemd-udevd[246195]: Using default interface naming scheme 'v249'.
Mar 19 17:12:34 kube-qualif-worker-02 networkd-dispatcher[645]: WARNING:Unknown index 81 seen, reloading interface list
Mar 19 17:12:34 kube-qualif-worker-02 systemd-networkd[599]: calif8ddf9a33da: Link UP
Mar 19 17:12:34 kube-qualif-worker-02 systemd-networkd[599]: calif8ddf9a33da: Gained carrier
Mar 19 17:12:34 kube-qualif-worker-02 systemd[1]: Started libcontainer container 4a9e62b065d5e42246b13fd1edaabc31f78c4be35d2d643653a32722f58e2394.
Mar 19 17:12:35 kube-qualif-worker-02 systemd[1]: Started libcontainer container 1e422434cf8a8ee93e56bd59f27799ffcdf2f58ae22242c34cdd45f15b069ef0.
Mar 19 17:12:35 kube-qualif-worker-02 multipathd[57867]: 3600c0ff0006e11817fa4f96501000000: performing delayed actions
Mar 19 17:12:35 kube-qualif-worker-02 multipathd[57867]: 3600c0ff0006e11817fa4f96501000000: reload [0 10485760 multipath 1 queue_if_no_path 1 alua 2 1 service-time 0 1 1 8:224 1 service-time 0 1 1 8:208 1]
Mar 19 17:12:35 kube-qualif-worker-02 systemd[1]: cri-containerd-1e422434cf8a8ee93e56bd59f27799ffcdf2f58ae22242c34cdd45f15b069ef0.scope: Deactivated successfully.
Mar 19 17:12:35 kube-qualif-worker-02 systemd[1]: run-k3s-containerd-io.containerd.runtime.v2.task-k8s.io-1e422434cf8a8ee93e56bd59f27799ffcdf2f58ae22242c34cdd45f15b069ef0-rootfs.mount: Deactivated successfully.
Mar 19 17:12:35 kube-qualif-worker-02 systemd-networkd[599]: calif8ddf9a33da: Gained IPv6LL
Mar 19 17:12:36 kube-qualif-worker-02 systemd[1]: run-k3s-containerd-io.containerd.grpc.v1.cri-sandboxes-4a9e62b065d5e42246b13fd1edaabc31f78c4be35d2d643653a32722f58e2394-shm.mount: Deactivated successfully.
Mar 19 17:12:36 kube-qualif-worker-02 systemd[1]: cri-containerd-4a9e62b065d5e42246b13fd1edaabc31f78c4be35d2d643653a32722f58e2394.scope: Deactivated successfully.
Mar 19 17:12:36 kube-qualif-worker-02 systemd[1]: run-k3s-containerd-io.containerd.runtime.v2.task-k8s.io-4a9e62b065d5e42246b13fd1edaabc31f78c4be35d2d643653a32722f58e2394-rootfs.mount: Deactivated successfully.
Mar 19 17:12:36 kube-qualif-worker-02 systemd-networkd[599]: calif8ddf9a33da: Link DOWN
Mar 19 17:12:36 kube-qualif-worker-02 systemd-networkd[599]: calif8ddf9a33da: Lost carrier
Mar 19 17:12:36 kube-qualif-worker-02 systemd[1]: run-netns-cni\x2de79d417a\x2d85a2\x2d181c\x2d3028\x2db2b6657adf7f.mount: Deactivated successfully.
Mar 19 17:12:36 kube-qualif-worker-02 systemd[1]: var-lib-kubelet-pods-c4f1693c\x2d571f\x2d43a3\x2dbd0b\x2d02fde03a8da6-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dl7zkh.mount: Deactivated successfully.
Mar 19 17:12:37 kube-qualif-worker-02 systemd[1]: Removed slice libcontainer container kubepods-besteffort-podc4f1693c_571f_43a3_bd0b_02fde03a8da6.slice.
Mar 19 17:13:47 kube-qualif-worker-02 kernel: EXT4-fs (dm-6): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none.
Mar 19 17:15:00 kube-qualif-worker-02 systemd[1]: Created slice libcontainer container kubepods-besteffort-pod8b5ac9dd_9e59_46a7_83a3_6522683a5138.slice.
Mar 19 17:15:00 kube-qualif-worker-02 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
Mar 19 17:15:00 kube-qualif-worker-02 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): caliac3c78be11a: link becomes ready
Mar 19 17:15:00 kube-qualif-worker-02 systemd-networkd[599]: caliac3c78be11a: Link UP
Mar 19 17:15:00 kube-qualif-worker-02 systemd-networkd[599]: caliac3c78be11a: Gained carrier
Mar 19 17:15:00 kube-qualif-worker-02 networkd-dispatcher[645]: WARNING:Unknown index 82 seen, reloading interface list
Mar 19 17:15:00 kube-qualif-worker-02 systemd-udevd[247903]: Using default interface naming scheme 'v249'.
Mar 19 17:15:00 kube-qualif-worker-02 systemd[1]: Started libcontainer container 825ac73a01dba6ba63a703e7c2853d16cb7c3a0ffb4fc19629b5dfb3b2f1729a.
Mar 19 17:15:01 kube-qualif-worker-02 systemd[1]: Started libcontainer container 01bc63cb42e634ce932f85216ac91183010a763086cf34e5b7e05b32a164f0ec.
Mar 19 17:15:01 kube-qualif-worker-02 systemd[1]: cri-containerd-01bc63cb42e634ce932f85216ac91183010a763086cf34e5b7e05b32a164f0ec.scope: Deactivated successfully.
Mar 19 17:15:01 kube-qualif-worker-02 kernel: scsi 3:0:0:15: Direct-Access     DellEMC  ME5              I200 PQ: 0 ANSI: 6
Mar 19 17:15:01 kube-qualif-worker-02 kernel: scsi 3:0:0:15: alua: supports implicit TPGS
Mar 19 17:15:01 kube-qualif-worker-02 kernel: scsi 3:0:0:15: alua: device naa.600c0ff0006e11816fa4f96501000000 port group 1 rel port 6
Mar 19 17:15:01 kube-qualif-worker-02 kernel: sd 3:0:0:15: Attached scsi generic sg18 type 0
Mar 19 17:15:01 kube-qualif-worker-02 kernel: scsi 4:0:0:15: Direct-Access     DellEMC  ME5              I200 PQ: 0 ANSI: 6
Mar 19 17:15:01 kube-qualif-worker-02 kernel: scsi 4:0:0:15: alua: supports implicit TPGS
Mar 19 17:15:01 kube-qualif-worker-02 kernel: scsi 4:0:0:15: alua: device naa.600c0ff0006e11816fa4f96501000000 port group 0 rel port 2
Mar 19 17:15:01 kube-qualif-worker-02 kernel: sd 4:0:0:15: Attached scsi generic sg19 type 0
Mar 19 17:15:01 kube-qualif-worker-02 kernel: sd 3:0:0:15: [sdp] 10485760 512-byte logical blocks: (5.37 GB/5.00 GiB)
Mar 19 17:15:01 kube-qualif-worker-02 kernel: sd 3:0:0:15: [sdp] 4096-byte physical blocks
Mar 19 17:15:01 kube-qualif-worker-02 kernel: sd 3:0:0:15: [sdp] Write Protect is off
Mar 19 17:15:01 kube-qualif-worker-02 kernel: sd 3:0:0:15: [sdp] Mode Sense: fb 00 00 08
Mar 19 17:15:01 kube-qualif-worker-02 kernel: sd 4:0:0:15: [sdq] 10485760 512-byte logical blocks: (5.37 GB/5.00 GiB)
Mar 19 17:15:01 kube-qualif-worker-02 kernel: sd 4:0:0:15: [sdq] 4096-byte physical blocks
Mar 19 17:15:01 kube-qualif-worker-02 kernel: sd 3:0:0:15: [sdp] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
Mar 19 17:15:01 kube-qualif-worker-02 kernel: sd 4:0:0:15: [sdq] Write Protect is off
Mar 19 17:15:01 kube-qualif-worker-02 kernel: sd 4:0:0:15: [sdq] Mode Sense: fb 00 00 08
Mar 19 17:15:01 kube-qualif-worker-02 kernel: sd 4:0:0:15: [sdq] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
Mar 19 17:15:01 kube-qualif-worker-02 kernel: sd 3:0:0:15: [sdp] Optimal transfer size 1048576 bytes
Mar 19 17:15:01 kube-qualif-worker-02 kernel: sd 4:0:0:15: [sdq] Optimal transfer size 1048576 bytes
Mar 19 17:15:01 kube-qualif-worker-02 kernel: sd 3:0:0:15: [sdp] Attached SCSI disk
Mar 19 17:15:01 kube-qualif-worker-02 kernel: sd 4:0:0:15: [sdq] Attached SCSI disk
Mar 19 17:15:01 kube-qualif-worker-02 kernel: sd 4:0:0:15: alua: transition timeout set to 60 seconds
Mar 19 17:15:01 kube-qualif-worker-02 kernel: sd 4:0:0:15: alua: port group 00 state A preferred supports tolusNA
Mar 19 17:15:01 kube-qualif-worker-02 kernel: sd 3:0:0:15: alua: transition timeout set to 60 seconds
Mar 19 17:15:01 kube-qualif-worker-02 kernel: sd 3:0:0:15: alua: port group 01 state N non-preferred supports tolusNA
Mar 19 17:15:01 kube-qualif-worker-02 systemd-udevd[248046]: sdp: Process '/usr/bin/unshare -m /usr/bin/snap auto-import --mount=/dev/sdp' failed with exit code 1.
Mar 19 17:15:01 kube-qualif-worker-02 systemd-udevd[248050]: sdq: Process '/usr/bin/unshare -m /usr/bin/snap auto-import --mount=/dev/sdq' failed with exit code 1.
Mar 19 17:15:01 kube-qualif-worker-02 multipathd[57867]: 3600c0ff0006e11816fa4f96501000000: addmap [0 10485760 multipath 1 queue_if_no_path 1 alua 1 1 service-time 0 1 1 8:240 1]
Mar 19 17:15:01 kube-qualif-worker-02 systemd-udevd[248039]: dm-7: Process '/usr/bin/unshare -m /usr/bin/snap auto-import --mount=/dev/dm-7' failed with exit code 1.
Mar 19 17:15:01 kube-qualif-worker-02 multipathd[57867]: sdp [8:240]: path added to devmap 3600c0ff0006e11816fa4f96501000000
Mar 19 17:15:01 kube-qualif-worker-02 systemd-networkd[599]: caliac3c78be11a: Gained IPv6LL
Mar 19 17:15:02 kube-qualif-worker-02 multipathd[57867]: 3600c0ff0006e11816fa4f96501000000: performing delayed actions
Mar 19 17:15:02 kube-qualif-worker-02 multipathd[57867]: 3600c0ff0006e11816fa4f96501000000: reload [0 10485760 multipath 1 queue_if_no_path 1 alua 2 1 service-time 0 1 1 65:0 1 service-time 0 1 1 8:240 1]
