I've been experiencing an issue with my vSphere CSI node pods. They continuously enter a CrashLoopBackOff state and won't resume normal operation until I restart the Kubernetes node VM. After restarting the VM cube-worker-storage-01-test-dev.dc, the pod vsphere-csi-node-4bp6l is running successfully.
vmware-system-csi/vsphere-csi-controller-86dffc5954-7x592:csi-snapshotter
E0221 02:21:27.124861 1 reflector.go:140] github.com/kubernetes-csi/external-snapshotter/client/v6/informers/externalversions/factory.go:117: Failed to watch *v1.VolumeSnapshotContent: failed to list *v1.VolumeSnapshotContent: the server could not find the requested resource (get volumesnapshotcontents.snapshot.storage.k8s.io)
I0221 02:22:00.049524 1 reflector.go:257] Listing and watching *v1.VolumeSnapshotContent from github.com/kubernetes-csi/external-snapshotter/client/v6/informers/externalversions/factory.go:117
W0221 02:22:00.050445 1 reflector.go:424] github.com/kubernetes-csi/external-snapshotter/client/v6/informers/externalversions/factory.go:117: failed to list *v1.VolumeSnapshotContent: the server could not find the requested resource (get volumesnapshotcontents.snapshot.storage.k8s.io)
E0221 02:22:00.050479 1 reflector.go:140] github.com/kubernetes-csi/external-snapshotter/client/v6/informers/externalversions/factory.go:117: Failed to watch *v1.VolumeSnapshotContent: failed to list *v1.VolumeSnapshotContent: the server could not find the requested resource (get volumesnapshotcontents.snapshot.storage.k8s.io)
I0221 02:22:09.516316 1 reflector.go:257] Listing and watching *v1.VolumeSnapshotClass from github.com/kubernetes-csi/external-snapshotter/client/v6/informers/externalversions/factory.go:117
W0221 02:22:09.517518 1 reflector.go:424] github.com/kubernetes-csi/external-snapshotter/client/v6/informers/externalversions/factory.go:117: failed to list *v1.VolumeSnapshotClass: the server could not find the requested resource (get volumesnapshotclasses.snapshot.storage.k8s.io)
E0221 02:22:09.517594 1 reflector.go:140] github.com/kubernetes-csi/external-snapshotter/client/v6/informers/externalversions/factory.go:117: Failed to watch *v1.VolumeSnapshotClass: failed to list *v1.VolumeSnapshotClass: the server could not find the requested resource (get volumesnapshotclasses.snapshot.storage.k8s.io)
I0221 02:22:41.746761 1 reflector.go:257] Listing and watching *v1.VolumeSnapshotClass from github.com/kubernetes-csi/external-snapshotter/client/v6/informers/externalversions/factory.go:117
W0221 02:22:41.748337 1 reflector.go:424] github.com/kubernetes-csi/external-snapshotter/client/v6/informers/externalversions/factory.go:117: failed to list *v1.VolumeSnapshotClass: the server could not find the requested resource (get volumesnapshotclasses.snapshot.storage.k8s.io)
E0221 02:22:41.748452 1 reflector.go:140] github.com/kubernetes-csi/external-snapshotter/client/v6/informers/externalversions/factory.go:117: Failed to watch *v1.VolumeSnapshotClass: failed to list *v1.VolumeSnapshotClass: the server could not find the requested resource (get volumesnapshotclasses.snapshot.storage.k8s.io)
I0221 02:22:59.358079 1 reflector.go:257] Listing and watching *v1.VolumeSnapshotContent from github.com/kubernetes-csi/external-snapshotter/client/v6/informers/externalversions/factory.go:117
W0221 02:22:59.359454 1 reflector.go:424] github.com/kubernetes-csi/external-snapshotter/client/v6/informers/externalversions/factory.go:117: failed to list *v1.VolumeSnapshotContent: the server could not find the requested resource (get volumesnapshotcontents.snapshot.storage.k8s.io)
E0221 02:22:59.359494 1 reflector.go:140] github.com/kubernetes-csi/external-snapshotter/client/v6/informers/externalversions/factory.go:117: Failed to watch *v1.VolumeSnapshotContent: failed to list *v1.VolumeSnapshotContent: the server could not find the requested resource (get volumesnapshotcontents.snapshot.storage.k8s.io)
I0221 02:23:18.421078 1 reflector.go:257] Listing and watching *v1.VolumeSnapshotClass from github.com/kubernetes-csi/external-snapshotter/client/v6/informers/externalversions/factory.go:117
W0221 02:23:18.443521 1 reflector.go:424] github.com/kubernetes-csi/external-snapshotter/client/v6/informers/externalversions/factory.go:117: failed to list *v1.VolumeSnapshotClass: the server could not find the requested resource (get volumesnapshotclasses.snapshot.storage.k8s.io)
E0221 02:23:18.443553 1 reflector.go:140] github.com/kubernetes-csi/external-snapshotter/client/v6/informers/externalversions/factory.go:117: Failed to watch *v1.VolumeSnapshotClass: failed to list *v1.VolumeSnapshotClass: the server could not find the requested resource (get volumesnapshotclasses.snapshot.storage.k8s.io)
I0221 02:23:49.792646 1 reflector.go:257] Listing and watching *v1.VolumeSnapshotContent from github.com/kubernetes-csi/external-snapshotter/client/v6/informers/externalversions/factory.go:117
W0221 02:23:49.794084 1 reflector.go:424] github.com/kubernetes-csi/external-snapshotter/client/v6/informers/externalversions/factory.go:117: failed to list *v1.VolumeSnapshotContent: the server could not find the requested resource (get volumesnapshotcontents.snapshot.storage.k8s.io)
E0221 02:23:49.794153 1 reflector.go:140] github.com/kubernetes-csi/external-snapshotter/client/v6/informers/externalversions/factory.go:117: Failed to watch *v1.VolumeSnapshotContent: failed to list *v1.VolumeSnapshotContent: the server could not find the requested resource (get volumesnapshotcontents.snapshot.storage.k8s.io)
I0221 02:23:55.690022 1 reflector.go:257] Listing and watching *v1.VolumeSnapshotClass from github.com/kubernetes-csi/external-snapshotter/client/v6/informers/externalversions/factory.go:117
W0221 02:23:55.691698 1 reflector.go:424] github.com/kubernetes-csi/external-snapshotter/client/v6/informers/externalversions/factory.go:117: failed to list *v1.VolumeSnapshotClass: the server could not find the requested resource (get volumesnapshotclasses.snapshot.storage.k8s.io)
E0221 02:23:55.691746 1 reflector.go:140] github.com/kubernetes-csi/external-snapshotter/client/v6/informers/externalversions/factory.go:117: Failed to watch *v1.VolumeSnapshotClass: failed to list *v1.VolumeSnapshotClass: the server could not find the requested resource (get volumesnapshotclasses.snapshot.storage.k8s.io)
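All of the csi-snapshotter errors above reduce to the API server not serving two resources. In case it helps with triage, here is a small script I used (just a sketch of mine, not part of the driver) that extracts the distinct missing resources from klog-style reflector errors; for my logs it comes back with the two snapshot resources, which suggests the external-snapshotter CRDs may not be installed in this cluster:

```python
import re

# klog reflector errors embed the missing resource as
# "(get <resource>.<group>)" -- pull out the distinct resources.
PATTERN = re.compile(r"could not find the requested resource \(get ([\w.]+)\)")

def missing_resources(log_text: str) -> set:
    """Return the distinct API resources the reflectors failed to find."""
    return set(PATTERN.findall(log_text))

# Two representative lines copied (abridged) from the logs above.
sample = (
    "E0221 02:21:27.124861 1 reflector.go:140] failed to list "
    "*v1.VolumeSnapshotContent: the server could not find the requested "
    "resource (get volumesnapshotcontents.snapshot.storage.k8s.io)\n"
    "W0221 02:22:09.517518 1 reflector.go:424] failed to list "
    "*v1.VolumeSnapshotClass: the server could not find the requested "
    "resource (get volumesnapshotclasses.snapshot.storage.k8s.io)\n"
)
print(sorted(missing_resources(sample)))
```

If the snapshot CRDs really are absent, the csi-snapshotter sidecar will keep logging these list/watch failures, though that is separate from the node pod crashloop.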
$ k logs vsphere-csi-node-4bp6l -c node-driver-registrar
I0221 02:04:18.151443 1 main.go:167] Version: v2.8.0
I0221 02:04:18.151658 1 main.go:168] Running node-driver-registrar in mode=registration
I0221 02:04:18.152643 1 main.go:192] Attempting to open a gRPC connection with: "/csi/csi.sock"
I0221 02:04:18.152739 1 connection.go:164] Connecting to unix:///csi/csi.sock
I0221 02:04:19.153935 1 main.go:199] Calling CSI driver to discover driver name
I0221 02:04:19.154064 1 connection.go:193] GRPC call: /csi.v1.Identity/GetPluginInfo
I0221 02:04:19.154079 1 connection.go:194] GRPC request: {}
I0221 02:04:19.160126 1 connection.go:200] GRPC response: {"name":"csi.vsphere.vmware.com","vendor_version":"v3.1.2"}
I0221 02:04:19.160147 1 connection.go:201] GRPC error: <nil>
I0221 02:04:19.160158 1 main.go:209] CSI driver name: "csi.vsphere.vmware.com"
I0221 02:04:19.160320 1 node_register.go:53] Starting Registration Server at: /registration/csi.vsphere.vmware.com-reg.sock
I0221 02:04:19.160859 1 node_register.go:62] Registration Server started at: /registration/csi.vsphere.vmware.com-reg.sock
I0221 02:04:19.161213 1 node_register.go:92] Skipping HTTP server because endpoint is set to: ""
I0221 02:04:19.230619 1 main.go:102] Received GetInfo call: &InfoRequest{}
I0221 02:04:19.231017 1 main.go:109] "Kubelet registration probe created" path="/var/lib/kubelet/plugins/csi.vsphere.vmware.com/registration"
I0221 02:05:02.263756 1 main.go:121] Received NotifyRegistrationStatus call: &RegistrationStatus{PluginRegistered:true,Error:,}
$ k logs vsphere-csi-node-4bp6l -c vsphere-csi-node
{"level":"info","time":"2024-02-21T02:04:18.870125832Z","caller":"logger/logger.go:41","msg":"Setting default log level to :\"PRODUCTION\""}
{"level":"info","time":"2024-02-21T02:04:18.871462547Z","caller":"vsphere-csi/main.go:56","msg":"Version : v3.1.2","TraceId":"2f80c44f-34cf-487e-a321-ef5e3504b79c"}
{"level":"info","time":"2024-02-21T02:04:18.871615169Z","caller":"vsphere-csi/main.go:73","msg":"Enable logging off for vCenter sessions on exit","TraceId":"2f80c44f-34cf-487e-a321-ef5e3504b79c"}
{"level":"info","time":"2024-02-21T02:04:18.875357793Z","caller":"logger/logger.go:41","msg":"Setting default log level to :\"PRODUCTION\""}
{"level":"info","time":"2024-02-21T02:04:18.875397419Z","caller":"k8sorchestrator/k8sorchestrator.go:251","msg":"Initializing k8sOrchestratorInstance","TraceId":"2f80c44f-34cf-487e-a321-ef5e3504b79c","TraceId":"aaa020c6-47b8-4d48-a4da-e220da2a9961"}
{"level":"info","time":"2024-02-21T02:04:18.875435173Z","caller":"kubernetes/kubernetes.go:85","msg":"k8s client using in-cluster config","TraceId":"2f80c44f-34cf-487e-a321-ef5e3504b79c","TraceId":"aaa020c6-47b8-4d48-a4da-e220da2a9961"}
{"level":"info","time":"2024-02-21T02:04:18.875590163Z","caller":"kubernetes/kubernetes.go:382","msg":"Setting client QPS to 100.000000 and Burst to 100.","TraceId":"2f80c44f-34cf-487e-a321-ef5e3504b79c","TraceId":"aaa020c6-47b8-4d48-a4da-e220da2a9961"}
{"level":"info","time":"2024-02-21T02:04:18.875837987Z","caller":"kubernetes/kubernetes.go:85","msg":"k8s client using in-cluster config","TraceId":"2f80c44f-34cf-487e-a321-ef5e3504b79c","TraceId":"aaa020c6-47b8-4d48-a4da-e220da2a9961"}
{"level":"info","time":"2024-02-21T02:04:18.875906838Z","caller":"kubernetes/kubernetes.go:382","msg":"Setting client QPS to 100.000000 and Burst to 100.","TraceId":"2f80c44f-34cf-487e-a321-ef5e3504b79c","TraceId":"aaa020c6-47b8-4d48-a4da-e220da2a9961"}
{"level":"info","time":"2024-02-21T02:04:18.876442048Z","caller":"kubernetes/informers.go:85","msg":"Created new informer factory for in-cluster client","TraceId":"2f80c44f-34cf-487e-a321-ef5e3504b79c","TraceId":"aaa020c6-47b8-4d48-a4da-e220da2a9961"}
{"level":"info","time":"2024-02-21T02:04:18.89652524Z","caller":"k8sorchestrator/k8sorchestrator.go:407","msg":"New internal feature states values stored successfully: map[async-query-volume:true block-volume-snapshot:true cnsmgr-suspend-create-volume:true csi-auth-check:true csi-internal-generated-cluster-id:true csi-migration:true csi-windows-support:true list-volumes:true listview-tasks:true max-pvscsi-targets-per-vm:true multi-vcenter-csi-topology:true online-volume-extend:true pv-to-backingdiskobjectid-mapping:false topology-preferential-datastores:true trigger-csi-fullsync:false]","TraceId":"2f80c44f-34cf-487e-a321-ef5e3504b79c","TraceId":"aaa020c6-47b8-4d48-a4da-e220da2a9961"}
{"level":"info","time":"2024-02-21T02:04:18.897245174Z","caller":"k8sorchestrator/k8sorchestrator.go:322","msg":"k8sOrchestratorInstance initialized","TraceId":"2f80c44f-34cf-487e-a321-ef5e3504b79c","TraceId":"aaa020c6-47b8-4d48-a4da-e220da2a9961"}
{"level":"info","time":"2024-02-21T02:04:18.899824637Z","caller":"k8sorchestrator/k8sorchestrator.go:647","msg":"configMapAdded: Internal feature state values from \"internal-feature-states.csi.vsphere.vmware.com\" stored successfully: map[async-query-volume:true block-volume-snapshot:true cnsmgr-suspend-create-volume:true csi-auth-check:true csi-internal-generated-cluster-id:true csi-migration:true csi-windows-support:true list-volumes:true listview-tasks:true max-pvscsi-targets-per-vm:true multi-vcenter-csi-topology:true online-volume-extend:true pv-to-backingdiskobjectid-mapping:false topology-preferential-datastores:true trigger-csi-fullsync:false]","TraceId":"0ce80253-1a6b-4584-833a-8582b7d6cac1"}
{"level":"info","time":"2024-02-21T02:04:18.904608305Z","caller":"service/driver.go:109","msg":"Configured: \"csi.vsphere.vmware.com\" with clusterFlavor: \"VANILLA\" and mode: \"node\"","TraceId":"2f80c44f-34cf-487e-a321-ef5e3504b79c","TraceId":"aaa020c6-47b8-4d48-a4da-e220da2a9961"}
{"level":"info","time":"2024-02-21T02:04:18.914383661Z","caller":"service/server.go:131","msg":"identity service registered"}
{"level":"info","time":"2024-02-21T02:04:18.91442537Z","caller":"service/server.go:146","msg":"node service registered"}
{"level":"info","time":"2024-02-21T02:04:18.914434854Z","caller":"service/server.go:152","msg":"Listening for connections on address: /csi/csi.sock"}
{"level":"info","time":"2024-02-21T02:04:19.232792467Z","caller":"service/node.go:338","msg":"NodeGetInfo: called with args {XXX_NoUnkeyedLiteral:{} XXX_unrecognized:[] XXX_sizecache:0}","TraceId":"1778ac26-3f84-4bcf-9203-9e40c4ef4e5b"}
{"level":"info","time":"2024-02-21T02:04:19.233070779Z","caller":"osutils/linux_os_utils.go:859","msg":"UUID is 42373709-336f-0c5e-7844-d2735763426c","TraceId":"1778ac26-3f84-4bcf-9203-9e40c4ef4e5b"}
{"level":"info","time":"2024-02-21T02:04:19.233104504Z","caller":"service/node.go:387","msg":"NodeGetInfo: MAX_VOLUMES_PER_NODE is set to 59","TraceId":"1778ac26-3f84-4bcf-9203-9e40c4ef4e5b"}
{"level":"info","time":"2024-02-21T02:04:19.233116602Z","caller":"kubernetes/kubernetes.go:85","msg":"k8s client using in-cluster config","TraceId":"1778ac26-3f84-4bcf-9203-9e40c4ef4e5b"}
{"level":"info","time":"2024-02-21T02:04:19.233287809Z","caller":"kubernetes/kubernetes.go:382","msg":"Setting client QPS to 100.000000 and Burst to 100.","TraceId":"1778ac26-3f84-4bcf-9203-9e40c4ef4e5b"}
{"level":"info","time":"2024-02-21T02:04:19.233303644Z","caller":"kubernetes/kubernetes.go:85","msg":"k8s client using in-cluster config","TraceId":"1778ac26-3f84-4bcf-9203-9e40c4ef4e5b"}
{"level":"info","time":"2024-02-21T02:04:19.233397509Z","caller":"kubernetes/kubernetes.go:382","msg":"Setting client QPS to 100.000000 and Burst to 100.","TraceId":"1778ac26-3f84-4bcf-9203-9e40c4ef4e5b"}
{"level":"info","time":"2024-02-21T02:04:19.244131026Z","caller":"k8sorchestrator/topology.go:713","msg":"Topology service initiated successfully","TraceId":"1778ac26-3f84-4bcf-9203-9e40c4ef4e5b"}
{"level":"info","time":"2024-02-21T02:04:19.253108028Z","caller":"k8sorchestrator/topology.go:763","msg":"Successfully patched CSINodeTopology instance: \"cube-worker-storage-01-test-dev.ABC.dc\" with Uuid: \"42373709-336f-0c5e-7844-d2735763426c\"","TraceId":"1778ac26-3f84-4bcf-9203-9e40c4ef4e5b"}
{"level":"info","time":"2024-02-21T02:04:19.253201187Z","caller":"k8sorchestrator/topology.go:979","msg":"Timeout is set to 1 minute(s)","TraceId":"1778ac26-3f84-4bcf-9203-9e40c4ef4e5b"}
{"level":"info","time":"2024-02-21T02:05:02.243902897Z","caller":"service/node.go:446","msg":"NodeGetInfo response: node_id:\"42373709-336f-0c5e-7844-d2735763426c\" max_volumes_per_node:59 accessible_topology:<> ","TraceId":"1778ac26-3f84-4bcf-9203-9e40c4ef4e5b"}
$ k logs --previous vsphere-csi-node-x4j5m -c node-driver-registrar
I0221 02:04:18.078240 1 main.go:167] Version: v2.8.0
I0221 02:04:18.078342 1 main.go:168] Running node-driver-registrar in mode=registration
I0221 02:04:18.079404 1 main.go:192] Attempting to open a gRPC connection with: "/csi/csi.sock"
I0221 02:04:18.079434 1 connection.go:164] Connecting to unix:///csi/csi.sock
I0221 02:04:19.083501 1 main.go:199] Calling CSI driver to discover driver name
I0221 02:04:19.083529 1 connection.go:193] GRPC call: /csi.v1.Identity/GetPluginInfo
I0221 02:04:19.083546 1 connection.go:194] GRPC request: {}
I0221 02:04:19.092767 1 connection.go:200] GRPC response: {"name":"csi.vsphere.vmware.com","vendor_version":"v3.1.2"}
I0221 02:04:19.092782 1 connection.go:201] GRPC error: <nil>
I0221 02:04:19.092792 1 main.go:209] CSI driver name: "csi.vsphere.vmware.com"
I0221 02:04:19.092818 1 node_register.go:53] Starting Registration Server at: /registration/csi.vsphere.vmware.com-reg.sock
I0221 02:04:19.093124 1 node_register.go:62] Registration Server started at: /registration/csi.vsphere.vmware.com-reg.sock
I0221 02:04:19.093304 1 node_register.go:92] Skipping HTTP server because endpoint is set to: ""
I0221 02:04:20.065611 1 main.go:102] Received GetInfo call: &InfoRequest{}
I0221 02:04:20.065979 1 main.go:109] "Kubelet registration probe created" path="/var/lib/kubelet/plugins/csi.vsphere.vmware.com/registration"
I0221 02:04:47.876326 1 main.go:121] Received NotifyRegistrationStatus call: &RegistrationStatus{PluginRegistered:false,Error:RegisterPlugin error -- plugin registration failed with err: rpc error: code = Unavailable desc = error reading from server: EOF,}
E0221 02:04:47.876399 1 main.go:123] Registration process failed with error: RegisterPlugin error -- plugin registration failed with err: rpc error: code = Unavailable desc = error reading from server: EOF, restarting registration container.
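If I read the EOF above correctly, the registrar's peer (the vsphere-csi-node container serving /csi/csi.sock) closed the connection mid-registration, i.e. the driver container died while registration was in flight. A quick probe I used from the node to check whether anything still accepts connections on the plugin socket (a sketch; the socket path below is the kubelet-side default and may differ in your deployment):

```python
import os
import socket

# Kubelet-side path of the vSphere CSI plugin socket (assumption: default
# /var/lib/kubelet root, as in the stock vanilla manifest).
SOCK = "/var/lib/kubelet/plugins/csi.vsphere.vmware.com/csi.sock"

def socket_alive(path: str) -> bool:
    """Return True if a server currently accepts connections on the
    given unix-domain socket path."""
    if not os.path.exists(path):
        return False
    s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    try:
        s.settimeout(1.0)
        s.connect(path)
        return True
    except OSError:
        # Stale socket file with no listener, or connection refused.
        return False
    finally:
        s.close()

print(socket_alive(SOCK))
```

When the driver container has just crashed, the socket file can still exist while nothing is listening, which matches the registrar's "error reading from server: EOF".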
$ k logs --previous vsphere-csi-node-x4j5m -c vsphere-csi-node
{"level":"info","time":"2024-02-21T02:24:38.594167579Z","caller":"logger/logger.go:41","msg":"Setting default log level to :\"PRODUCTION\""}
{"level":"info","time":"2024-02-21T02:24:38.594605629Z","caller":"vsphere-csi/main.go:56","msg":"Version : v3.1.2","TraceId":"dec2f036-0c44-4b2a-ad6a-4463a6343c5e"}
{"level":"info","time":"2024-02-21T02:24:38.594729446Z","caller":"vsphere-csi/main.go:73","msg":"Enable logging off for vCenter sessions on exit","TraceId":"dec2f036-0c44-4b2a-ad6a-4463a6343c5e"}
{"level":"info","time":"2024-02-21T02:24:38.595165743Z","caller":"logger/logger.go:41","msg":"Setting default log level to :\"PRODUCTION\""}
{"level":"info","time":"2024-02-21T02:24:38.595441566Z","caller":"k8sorchestrator/k8sorchestrator.go:251","msg":"Initializing k8sOrchestratorInstance","TraceId":"dec2f036-0c44-4b2a-ad6a-4463a6343c5e","TraceId":"92e5ea38-cbcf-449d-94e4-dbd4b285aae5"}
{"level":"info","time":"2024-02-21T02:24:38.595538142Z","caller":"kubernetes/kubernetes.go:85","msg":"k8s client using in-cluster config","TraceId":"dec2f036-0c44-4b2a-ad6a-4463a6343c5e","TraceId":"92e5ea38-cbcf-449d-94e4-dbd4b285aae5"}
{"level":"info","time":"2024-02-21T02:24:38.596149272Z","caller":"kubernetes/kubernetes.go:382","msg":"Setting client QPS to 100.000000 and Burst to 100.","TraceId":"dec2f036-0c44-4b2a-ad6a-4463a6343c5e","TraceId":"92e5ea38-cbcf-449d-94e4-dbd4b285aae5"}
{"level":"info","time":"2024-02-21T02:24:38.596975244Z","caller":"kubernetes/kubernetes.go:85","msg":"k8s client using in-cluster config","TraceId":"dec2f036-0c44-4b2a-ad6a-4463a6343c5e","TraceId":"92e5ea38-cbcf-449d-94e4-dbd4b285aae5"}
{"level":"info","time":"2024-02-21T02:24:38.597420818Z","caller":"kubernetes/kubernetes.go:382","msg":"Setting client QPS to 100.000000 and Burst to 100.","TraceId":"dec2f036-0c44-4b2a-ad6a-4463a6343c5e","TraceId":"92e5ea38-cbcf-449d-94e4-dbd4b285aae5"}
{"level":"info","time":"2024-02-21T02:24:38.597767769Z","caller":"kubernetes/informers.go:85","msg":"Created new informer factory for in-cluster client","TraceId":"dec2f036-0c44-4b2a-ad6a-4463a6343c5e","TraceId":"92e5ea38-cbcf-449d-94e4-dbd4b285aae5"}
{"level":"info","time":"2024-02-21T02:24:38.606979186Z","caller":"k8sorchestrator/k8sorchestrator.go:407","msg":"New internal feature states values stored successfully: map[async-query-volume:true block-volume-snapshot:true cnsmgr-suspend-create-volume:true csi-auth-check:true csi-internal-generated-cluster-id:true csi-migration:true csi-windows-support:true list-volumes:true listview-tasks:true max-pvscsi-targets-per-vm:true multi-vcenter-csi-topology:true online-volume-extend:true pv-to-backingdiskobjectid-mapping:false topology-preferential-datastores:true trigger-csi-fullsync:false]","TraceId":"dec2f036-0c44-4b2a-ad6a-4463a6343c5e","TraceId":"92e5ea38-cbcf-449d-94e4-dbd4b285aae5"}
{"level":"info","time":"2024-02-21T02:24:38.607141745Z","caller":"k8sorchestrator/k8sorchestrator.go:322","msg":"k8sOrchestratorInstance initialized","TraceId":"dec2f036-0c44-4b2a-ad6a-4463a6343c5e","TraceId":"92e5ea38-cbcf-449d-94e4-dbd4b285aae5"}
{"level":"info","time":"2024-02-21T02:24:38.610823052Z","caller":"k8sorchestrator/k8sorchestrator.go:647","msg":"configMapAdded: Internal feature state values from \"internal-feature-states.csi.vsphere.vmware.com\" stored successfully: map[async-query-volume:true block-volume-snapshot:true cnsmgr-suspend-create-volume:true csi-auth-check:true csi-internal-generated-cluster-id:true csi-migration:true csi-windows-support:true list-volumes:true listview-tasks:true max-pvscsi-targets-per-vm:true multi-vcenter-csi-topology:true online-volume-extend:true pv-to-backingdiskobjectid-mapping:false topology-preferential-datastores:true trigger-csi-fullsync:false]","TraceId":"c0304904-850f-48b6-8333-43e622ffd93e"}
{"level":"info","time":"2024-02-21T02:24:38.611292887Z","caller":"service/driver.go:109","msg":"Configured: \"csi.vsphere.vmware.com\" with clusterFlavor: \"VANILLA\" and mode: \"node\"","TraceId":"dec2f036-0c44-4b2a-ad6a-4463a6343c5e","TraceId":"92e5ea38-cbcf-449d-94e4-dbd4b285aae5"}
{"level":"info","time":"2024-02-21T02:24:38.611645216Z","caller":"service/server.go:131","msg":"identity service registered"}
{"level":"info","time":"2024-02-21T02:24:38.611682199Z","caller":"service/server.go:146","msg":"node service registered"}
{"level":"info","time":"2024-02-21T02:24:38.611699282Z","caller":"service/server.go:152","msg":"Listening for connections on address: /csi/csi.sock"}
{"level":"info","time":"2024-02-21T02:25:07.861713443Z","caller":"vsphere-csi/main.go:88","msg":"SIGTERM signal received","TraceId":"dec2f036-0c44-4b2a-ad6a-4463a6343c5e"}
{"level":"info","time":"2024-02-21T02:25:07.861769196Z","caller":"utils/utils.go:230","msg":"Logging out all vCenter sessions","TraceId":"dec2f036-0c44-4b2a-ad6a-4463a6343c5e"}
{"level":"info","time":"2024-02-21T02:25:07.861793774Z","caller":"vsphere/virtualcentermanager.go:74","msg":"Initializing defaultVirtualCenterManager...","TraceId":"dec2f036-0c44-4b2a-ad6a-4463a6343c5e"}
{"level":"info","time":"2024-02-21T02:25:07.861814376Z","caller":"vsphere/virtualcentermanager.go:76","msg":"Successfully initialized defaultVirtualCenterManager","TraceId":"dec2f036-0c44-4b2a-ad6a-4463a6343c5e"}
{"level":"info","time":"2024-02-21T02:25:07.861820962Z","caller":"utils/utils.go:256","msg":"Successfully logged out vCenter sessions","TraceId":"dec2f036-0c44-4b2a-ad6a-4463a6343c5e"}
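One detail from the previous instance's logs: the container came up at 02:24:38 and received SIGTERM at 02:25:07, so it lived only about 29 seconds before being terminated, which matches the CrashLoopBackOff cycling. A quick sanity check on the timestamps copied from the logs above (truncated to microseconds):

```python
from datetime import datetime

# Timestamps copied from the previous vsphere-csi-node logs above:
# process startup vs. the SIGTERM that killed it.
started = datetime.fromisoformat("2024-02-21T02:24:38.594167")
sigterm = datetime.fromisoformat("2024-02-21T02:25:07.861713")

lifetime = (sigterm - started).total_seconds()
print(f"container lifetime before SIGTERM: {lifetime:.1f}s")
```

I can't tell from the logs alone whether the SIGTERM came from a failing liveness probe or from kubelet tearing the pod down after the registration failure; that is only my reading of the sequence.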
Environment:
vsphere-csi-driver: v3.1.2, deployed as-is from https://github.com/kubernetes-sigs/vsphere-csi-driver/blob/v3.1.2/manifests/vanilla/vsphere-csi-driver.yaml
k8s: v1.27
vSphere: 7.0.2
Compatibility: ESXi 7.0 U2 and later (VM version 19)
Here is the relevant pod data:
$ k get nodes
$ k get pods
I've also gathered the logs from each pod, but haven't been able to identify the cause of this issue.
vmware-system-csi/vsphere-csi-controller-86dffc5954-7x592:csi-attacher
vmware-system-csi/vsphere-csi-controller-86dffc5954-7x592:csi-provisioner
vmware-system-csi/vsphere-csi-controller-86dffc5954-7x592:csi-resizer
vmware-system-csi/vsphere-csi-controller-86dffc5954-7x592:csi-snapshotter
vmware-system-csi/vsphere-csi-controller-86dffc5954-7x592:vsphere-csi-controller
$ k logs vsphere-csi-node-4bp6l -c node-driver-registrar
$ k logs vsphere-csi-node-4bp6l -c vsphere-csi-node
$ k logs --previous vsphere-csi-node-x4j5m -c node-driver-registrar
$ k logs --previous vsphere-csi-node-x4j5m -c vsphere-csi-node
Any advice or suggestions regarding this issue would be greatly appreciated.
Thank you in advance for your help.