vSphere CSI Node Pods Consistently CrashLoopBackOff #2802

Open
sunqtronaut opened this issue Feb 21, 2024 · 1 comment
Labels: lifecycle/stale

@sunqtronaut commented Feb 21, 2024

Hello,

I've been experiencing an issue with my vSphere CSI node pods.
They continuously enter a CrashLoopBackOff state and won't resume normal operation until I restart the Kubernetes node VM.
For example, after restarting the VM backing cube-worker-storage-01-test-dev.dc, the pod vsphere-csi-node-4bp6l runs successfully.

vsphere-csi-driver: 3.1.2, deployed unmodified from https://github.com/kubernetes-sigs/vsphere-csi-driver/blob/v3.1.2/manifests/vanilla/vsphere-csi-driver.yaml
k8s: v1.27
vSphere: 7.0.2
Compatibility: ESXi 7.0 U2 and later (VM version 19)

Here is the relevant node and pod state:

$ k get nodes

NAME                                 STATUS   ROLES           AGE   VERSION
controlplane-01-test-dev.dc          Ready    control-plane   18h   v1.27.11
cube-worker-01-test-dev.dc           Ready    worker          18h   v1.27.11
cube-worker-02-test-dev.dc           Ready    worker          18h   v1.27.11
cube-worker-storage-01-test-dev.dc   Ready    worker          18h   v1.27.11
cube-worker-storage-02-test-dev.dc   Ready    worker          18h   v1.27.11

$ k get pods

NAME                                      READY   STATUS             RESTARTS         AGE
vsphere-csi-controller-86dffc5954-7x592   7/7     Running            0                24m
vsphere-csi-controller-86dffc5954-dzmkj   0/7     Pending            0                24m
vsphere-csi-controller-86dffc5954-kb86d   0/7     Pending            0                24m
vsphere-csi-node-4bp6l                    3/3     Running            0                24m
vsphere-csi-node-jw2f6                    2/3     CrashLoopBackOff   12 (4m37s ago)   24m
vsphere-csi-node-nwqk4                    2/3     CrashLoopBackOff   12 (3m41s ago)   24m
vsphere-csi-node-x4j5m                    2/3     CrashLoopBackOff   12 (3m52s ago)   24m
vsphere-csi-node-z2qj5                    2/3     CrashLoopBackOff   12 (3m37s ago)   24m
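
(Side note: I assume the two Pending vsphere-csi-controller replicas are expected here. If the vanilla manifest still pins the controller to control-plane nodes with one replica per node, only one of the three replicas can schedule on this single-control-plane cluster; the pod events should confirm that:

$ kubectl -n vmware-system-csi describe pod vsphere-csi-controller-86dffc5954-dzmkj | grep -A 5 Events
)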

I've also gathered the logs from each pod, but haven't been able to identify the cause of this issue.
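
For reference, the per-container logs below were pulled roughly like this (pod and container names as shown above; --previous only returns output once a container has restarted):

$ for c in node-driver-registrar vsphere-csi-node; do
    kubectl -n vmware-system-csi logs vsphere-csi-node-jw2f6 -c "$c" --previous
  done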

vmware-system-csi/vsphere-csi-controller-86dffc5954-7x592:csi-attacher

I0221 02:16:59.636124       1 reflector.go:376] k8s.io/client-go/informers/factory.go:150: forcing resync
I0221 02:16:59.636171       1 reflector.go:376] k8s.io/client-go/informers/factory.go:150: forcing resync
I0221 02:16:59.952569       1 csi_handler.go:123] Reconciling VolumeAttachments with driver backend state
I0221 02:17:59.975203       1 csi_handler.go:123] Reconciling VolumeAttachments with driver backend state
I0221 02:18:06.632557       1 reflector.go:788] k8s.io/client-go/informers/factory.go:150: Watch close - *v1.PersistentVolume total 0 items received
I0221 02:18:59.996124       1 csi_handler.go:123] Reconciling VolumeAttachments with driver backend state
I0221 02:20:00.021533       1 csi_handler.go:123] Reconciling VolumeAttachments with driver backend state
I0221 02:21:00.047508       1 csi_handler.go:123] Reconciling VolumeAttachments with driver backend state

vmware-system-csi/vsphere-csi-controller-86dffc5954-7x592:csi-provisioner

I0221 02:19:50.036857       1 reflector.go:788] k8s.io/client-go/informers/factory.go:150: Watch close - *v1.StorageClass total 4 items received
I0221 02:20:37.134571       1 reflector.go:788] sigs.k8s.io/sig-storage-lib-external-provisioner/v9/controller/controller.go:845: Watch close - *v1.PersistentVolume total 0 items received
I0221 02:20:39.037142       1 reflector.go:788] k8s.io/client-go/informers/factory.go:150: Watch close - *v1.PersistentVolumeClaim total 0 items received
I0221 02:21:42.032381       1 reflector.go:376] k8s.io/client-go/informers/factory.go:150: forcing resync
I0221 02:21:42.132789       1 reflector.go:376] sigs.k8s.io/sig-storage-lib-external-provisioner/v9/controller/controller.go:845: forcing resync

vmware-system-csi/vsphere-csi-controller-86dffc5954-7x592:csi-resizer

I0221 02:21:14.271833       1 reflector.go:788] k8s.io/client-go/informers/factory.go:150: Watch close - *v1.PersistentVolumeClaim total 0 items received
I0221 02:21:33.272688       1 reflector.go:788] k8s.io/client-go/informers/factory.go:150: Watch close - *v1.PersistentVolume total 0 items received

vmware-system-csi/vsphere-csi-controller-86dffc5954-7x592:csi-snapshotter

E0221 02:21:27.124861       1 reflector.go:140] github.com/kubernetes-csi/external-snapshotter/client/v6/informers/externalversions/factory.go:117: Failed to watch *v1.VolumeSnapshotContent: failed to list *v1.VolumeSnapshotContent: the server could not find the requested resource (get volumesnapshotcontents.snapshot.storage.k8s.io)
I0221 02:22:00.049524       1 reflector.go:257] Listing and watching *v1.VolumeSnapshotContent from github.com/kubernetes-csi/external-snapshotter/client/v6/informers/externalversions/factory.go:117
W0221 02:22:00.050445       1 reflector.go:424] github.com/kubernetes-csi/external-snapshotter/client/v6/informers/externalversions/factory.go:117: failed to list *v1.VolumeSnapshotContent: the server could not find the requested resource (get volumesnapshotcontents.snapshot.storage.k8s.io)
E0221 02:22:00.050479       1 reflector.go:140] github.com/kubernetes-csi/external-snapshotter/client/v6/informers/externalversions/factory.go:117: Failed to watch *v1.VolumeSnapshotContent: failed to list *v1.VolumeSnapshotContent: the server could not find the requested resource (get volumesnapshotcontents.snapshot.storage.k8s.io)
I0221 02:22:09.516316       1 reflector.go:257] Listing and watching *v1.VolumeSnapshotClass from github.com/kubernetes-csi/external-snapshotter/client/v6/informers/externalversions/factory.go:117
W0221 02:22:09.517518       1 reflector.go:424] github.com/kubernetes-csi/external-snapshotter/client/v6/informers/externalversions/factory.go:117: failed to list *v1.VolumeSnapshotClass: the server could not find the requested resource (get volumesnapshotclasses.snapshot.storage.k8s.io)
E0221 02:22:09.517594       1 reflector.go:140] github.com/kubernetes-csi/external-snapshotter/client/v6/informers/externalversions/factory.go:117: Failed to watch *v1.VolumeSnapshotClass: failed to list *v1.VolumeSnapshotClass: the server could not find the requested resource (get volumesnapshotclasses.snapshot.storage.k8s.io)
I0221 02:22:41.746761       1 reflector.go:257] Listing and watching *v1.VolumeSnapshotClass from github.com/kubernetes-csi/external-snapshotter/client/v6/informers/externalversions/factory.go:117
W0221 02:22:41.748337       1 reflector.go:424] github.com/kubernetes-csi/external-snapshotter/client/v6/informers/externalversions/factory.go:117: failed to list *v1.VolumeSnapshotClass: the server could not find the requested resource (get volumesnapshotclasses.snapshot.storage.k8s.io)
E0221 02:22:41.748452       1 reflector.go:140] github.com/kubernetes-csi/external-snapshotter/client/v6/informers/externalversions/factory.go:117: Failed to watch *v1.VolumeSnapshotClass: failed to list *v1.VolumeSnapshotClass: the server could not find the requested resource (get volumesnapshotclasses.snapshot.storage.k8s.io)
I0221 02:22:59.358079       1 reflector.go:257] Listing and watching *v1.VolumeSnapshotContent from github.com/kubernetes-csi/external-snapshotter/client/v6/informers/externalversions/factory.go:117
W0221 02:22:59.359454       1 reflector.go:424] github.com/kubernetes-csi/external-snapshotter/client/v6/informers/externalversions/factory.go:117: failed to list *v1.VolumeSnapshotContent: the server could not find the requested resource (get volumesnapshotcontents.snapshot.storage.k8s.io)
E0221 02:22:59.359494       1 reflector.go:140] github.com/kubernetes-csi/external-snapshotter/client/v6/informers/externalversions/factory.go:117: Failed to watch *v1.VolumeSnapshotContent: failed to list *v1.VolumeSnapshotContent: the server could not find the requested resource (get volumesnapshotcontents.snapshot.storage.k8s.io)
I0221 02:23:18.421078       1 reflector.go:257] Listing and watching *v1.VolumeSnapshotClass from github.com/kubernetes-csi/external-snapshotter/client/v6/informers/externalversions/factory.go:117
W0221 02:23:18.443521       1 reflector.go:424] github.com/kubernetes-csi/external-snapshotter/client/v6/informers/externalversions/factory.go:117: failed to list *v1.VolumeSnapshotClass: the server could not find the requested resource (get volumesnapshotclasses.snapshot.storage.k8s.io)
E0221 02:23:18.443553       1 reflector.go:140] github.com/kubernetes-csi/external-snapshotter/client/v6/informers/externalversions/factory.go:117: Failed to watch *v1.VolumeSnapshotClass: failed to list *v1.VolumeSnapshotClass: the server could not find the requested resource (get volumesnapshotclasses.snapshot.storage.k8s.io)
I0221 02:23:49.792646       1 reflector.go:257] Listing and watching *v1.VolumeSnapshotContent from github.com/kubernetes-csi/external-snapshotter/client/v6/informers/externalversions/factory.go:117
W0221 02:23:49.794084       1 reflector.go:424] github.com/kubernetes-csi/external-snapshotter/client/v6/informers/externalversions/factory.go:117: failed to list *v1.VolumeSnapshotContent: the server could not find the requested resource (get volumesnapshotcontents.snapshot.storage.k8s.io)
E0221 02:23:49.794153       1 reflector.go:140] github.com/kubernetes-csi/external-snapshotter/client/v6/informers/externalversions/factory.go:117: Failed to watch *v1.VolumeSnapshotContent: failed to list *v1.VolumeSnapshotContent: the server could not find the requested resource (get volumesnapshotcontents.snapshot.storage.k8s.io)
I0221 02:23:55.690022       1 reflector.go:257] Listing and watching *v1.VolumeSnapshotClass from github.com/kubernetes-csi/external-snapshotter/client/v6/informers/externalversions/factory.go:117
W0221 02:23:55.691698       1 reflector.go:424] github.com/kubernetes-csi/external-snapshotter/client/v6/informers/externalversions/factory.go:117: failed to list *v1.VolumeSnapshotClass: the server could not find the requested resource (get volumesnapshotclasses.snapshot.storage.k8s.io)
E0221 02:23:55.691746       1 reflector.go:140] github.com/kubernetes-csi/external-snapshotter/client/v6/informers/externalversions/factory.go:117: Failed to watch *v1.VolumeSnapshotClass: failed to list *v1.VolumeSnapshotClass: the server could not find the requested resource (get volumesnapshotclasses.snapshot.storage.k8s.io)
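
If I'm reading the csi-snapshotter errors right, the VolumeSnapshot CRDs from external-snapshotter are simply not installed in this cluster; that would explain the repeated "server could not find the requested resource" messages, though it should be unrelated to the node crashes. A quick check:

$ kubectl get crd | grep snapshot.storage.k8s.io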

vmware-system-csi/vsphere-csi-controller-86dffc5954-7x592:vsphere-csi-controller

| {"level":"info","time":"2024-02-21T02:19:20.384791157Z","caller":"k8sorchestrator/topology.go:230","msg":"Refreshing preferred datastores information...","TraceId":"e4e5d712-53 │
│ 54-40dd-8ada-27a72923b5e3"}                                                                                                                                                      │
│ {"level":"info","time":"2024-02-21T02:19:20.385376771Z","caller":"vsphere/utils.go:265","msg":"Defaulting timeout for vCenter Client to 5 minutes","TraceId":"e4e5d712-5354-40dd │
│ -8ada-27a72923b5e3"}                                                                                                                                                             │
│ {"level":"info","time":"2024-02-21T02:19:20.415464795Z","caller":"common/authmanager.go:279","msg":"No vSAN datastores found for vCenter \"HQ-ABC-VCSA01.ABC.LOCAL\"","TraceId": │
│ "ec548aeb-4c0f-4389-bae1-fb8f5eb18607"}                                                                                                                                          │
│ {"level":"info","time":"2024-02-21T02:19:20.415506301Z","caller":"common/authmanager.go:183","msg":"auth manager: newFsEnabledClusterToDsMap is updated to map[] for vCenter \"H │
│ Q-ABC-VCSA01.ABC.LOCAL\"","TraceId":"ec548aeb-4c0f-4389-bae1-fb8f5eb18607"}                                                                                                      │
│ {"level":"info","time":"2024-02-21T02:19:20.416027403Z","caller":"common/authmanager.go:163","msg":"auth manager: datastoreMapForBlockVolumes is updated to map[ds:///vmfs/volum │
│ es/5f59d3dd-56c2574c-9bd4-bc4a56032b0c/:Datastore: Datastore:datastore-34, datastore URL: ds:///vmfs/volumes/5f59d3dd-56c2574c-9bd4-bc4a56032b0c/ ds:///vmfs/volumes/5f59f202-22 │
│ 8e6fae-47d4-bc4a560326b0/:Datastore: Datastore:datastore-45, datastore URL: ds:///vmfs/volumes/5f59f202-228e6fae-47d4-bc4a560326b0/ ds:///vmfs/volumes/5f5a0731-410805c0-5140-bc │
│ 4a56032f20/:Datastore: Datastore:datastore-51, datastore URL: ds:///vmfs/volumes/5f5a0731-410805c0-5140-bc4a56032f20/ ds:///vmfs/volumes/5f5a09c2-01f34f9e-aa4e-bc4a56032a10/:Da │
│ tastore: Datastore:datastore-25, datastore URL: ds:///vmfs/volumes/5f5a09c2-01f34f9e-aa4e-bc4a56032a10/ ds:///vmfs/volumes/5f5a0e53-9c9d0208-8e57-bc4a560329a4/:Datastore: Datas │
│ tore:datastore-13046, datastore URL: ds:///vmfs/volumes/5f5a0e53-9c9d0208-8e57-bc4a560329a4/ ds:///vmfs/volumes/5f5ddbde-9bf31862-60dd-bc4a5602fdc4/:Datastore: Datastore:datast │
│ ore-48, datastore URL: ds:///vmfs/volumes/5f5ddbde-9bf31862-60dd-bc4a5602fdc4/ ds:///vmfs/volumes/5f5de184-e4dcdeec-7887-bc4a56032980/:Datastore: Datastore:datastore-28, datast │
│ ore URL: ds:///vmfs/volumes/5f5de184-e4dcdeec-7887-bc4a56032980/ ds:///vmfs/volumes/5f5de215-43b36590-8343-bc4a56030dcc/:Datastore: Datastore:datastore-54, datastore URL: ds:// │
│ /vmfs/volumes/5f5de215-43b36590-8343-bc4a56030dcc/ ds:///vmfs/volumes/5f5de4b3-4272cdd6-42fd-bc4a5603268c/:Datastore: Datastore:datastore-13050, datastore URL: ds:///vmfs/volum │
│ es/5f5de4b3-4272cdd6-42fd-bc4a5603268c/ ds:///vmfs/volumes/5f5e03fb-fc2e3634-c91b-bc4a560329ec/:Datastore: Datastore:datastore-41, datastore URL: ds:///vmfs/volumes/5f5e03fb-fc │
│ 2e3634-c91b-bc4a560329ec/ ds:///vmfs/volumes/5f60ad44-6eeef938-be69-bc4a56032b0c/:Datastore: Datastore:datastore-35, datastore URL: ds:///vmfs/volumes/5f60ad44-6eeef938-be69-bc │
│ 4a56032b0c/ ds:///vmfs/volumes/5f633e92-f4c6fb60-0f55-bc4a56032a10/:Datastore: Datastore:datastore-55, datastore URL: ds:///vmfs/volumes/5f633e92-f4c6fb60-0f55-bc4a56032a10/ ds │
│ :///vmfs/volumes/5f633f1a-c9ab0b12-7d93-bc4a56032980/:Datastore: Datastore:datastore-56, datastore URL: ds:///vmfs/volumes/5f633f1a-c9ab0b12-7d93-bc4a56032980/ ds:///vmfs/volum │
│ es/5f633f5b-413189c6-8e0e-bc4a560329a4/:Datastore: Datastore:datastore-57, datastore URL: ds:///vmfs/volumes/5f633f5b-413189c6-8e0e-bc4a560329a4/ ds:///vmfs/volumes/5f633fc3-0f │
│ ed51ac-7511-bc4a5603268c/:Datastore: Datastore:datastore-58, datastore URL: ds:///vmfs/volumes/5f633fc3-0fed51ac-7511-bc4a5603268c/ ds:///vmfs/volumes/5f63413f-a96aa1a6-06d0-bc │
│ 4a56032f20/:Datastore: Datastore:datastore-63, datastore URL: ds:///vmfs/volumes/5f63413f-a96aa1a6-06d0-bc4a56032f20/ ds:///vmfs/volumes/5f63417d-1566740a-c2e9-bc4a56032f20/:Da │
│ tastore: Datastore:datastore-64, datastore URL: ds:///vmfs/volumes/5f63417d-1566740a-c2e9-bc4a56032f20/ ds:///vmfs/volumes/5f6341c3-e544e6f0-0c38-bc4a56030dcc/:Datastore: Datas │
│ tore:datastore-65, datastore URL: ds:///vmfs/volumes/5f6341c3-e544e6f0-0c38-bc4a56030dcc/ ds:///vmfs/volumes/5f6341f8-2e689750-6356-bc4a56030dcc/:Datastore: Datastore:datastore │
│ -66, datastore URL: ds:///vmfs/volumes/5f6341f8-2e689750-6356-bc4a56030dcc/ ds:///vmfs/volumes/5f6342a3-ec0eac9e-0b81-bc4a56032b0c/:Datastore: Datastore:datastore-67, datastore │
│  URL: ds:///vmfs/volumes/5f6342a3-ec0eac9e-0b81-bc4a56032b0c/ ds:///vmfs/volumes/5f6342da-f624f172-2e22-bc4a56032b0c/:Datastore: Datastore:datastore-68, datastore URL: ds:///vm │
│ fs/volumes/5f6342da-f624f172-2e22-bc4a56032b0c/ ds:///vmfs/volumes/5f63430e-9ca4c746-ed01-bc4a56032b0c/:Datastore: Datastore:datastore-69, datastore URL: ds:///vmfs/volumes/5f6 │
│ 3430e-9ca4c746-ed01-bc4a56032b0c/ ds:///vmfs/volumes/5f634340-037e7512-8a02-bc4a560329ec/:Datastore: Datastore:datastore-70, datastore URL: ds:///vmfs/volumes/5f634340-037e7512 │
│ -8a02-bc4a560329ec/ ds:///vmfs/volumes/5f634377-e0ef3646-238d-bc4a560329ec/:Datastore: Datastore:datastore-71, datastore URL: ds:///vmfs/volumes/5f634377-e0ef3646-238d-bc4a5603 │
│ 29ec/ ds:///vmfs/volumes/5f6343a7-88f2cc92-5402-bc4a560329ec/:Datastore: Datastore:datastore-72, datastore URL: ds:///vmfs/volumes/5f6343a7-88f2cc92-5402-bc4a560329ec/ ds:///vm │
│ fs/volumes/5f7da5dd-8c3a0aae-9a6e-bc4a56032f20/:Datastore: Datastore:datastore-95, datastore URL: ds:///vmfs/volumes/5f7da5dd-8c3a0aae-9a6e-bc4a56032f20/ ds:///vmfs/volumes/5f7 │
│ da646-10b264ca-08a2-bc4a56032a10/:Datastore: Datastore:datastore-96, datastore URL: ds:///vmfs/volumes/5f7da646-10b264ca-08a2-bc4a56032a10/ ds:///vmfs/volumes/5fba22d0-28b4669e │
│ -ef5b-bc4a560329ec/:Datastore: Datastore:datastore-591, datastore URL: ds:///vmfs/volumes/5fba22d0-28b4669e-ef5b-bc4a560329ec/ ds:///vmfs/volumes/5fcf369d-b8e09b4e-2432-bc4a560 │
│ 32b0c/:Datastore: Datastore:datastore-612, datastore URL: ds:///vmfs/volumes/5fcf369d-b8e09b4e-2432-bc4a56032b0c/ ds:///vmfs/volumes/601fd1a1-c41bb032-ae91-bc4a56032b0c/:Datast │
│ ore: Datastore:datastore-800, datastore URL: ds:///vmfs/volumes/601fd1a1-c41bb032-ae91-bc4a56032b0c/ ds:///vmfs/volumes/60602b9f-4f9b2578-14b5-bc4a56032b0c/:Datastore: Datastor │
│ e:datastore-1286, datastore URL: ds:///vmfs/volumes/60602b9f-4f9b2578-14b5-bc4a56032b0c/ ds:///vmfs/volumes/60602c1a-9a358f6a-5c2d-bc4a560329ec/:Datastore: Datastore:datastore- │
│ 1287, datastore URL: ds:///vmfs/volumes/60602c1a-9a358f6a-5c2d-bc4a560329ec/ ds:///vmfs/volumes/60ab7ea5-f600a532-069a-bc4a56032a10/:Datastore: Datastore:datastore-1507, datast │
│ ore URL: ds:///vmfs/volumes/60ab7ea5-f600a532-069a-bc4a56032a10/ ds:///vmfs/volumes/60ab7f19-5431c080-e1e2-bc4a56032a10/:Datastore: Datastore:datastore-1508, datastore URL: ds: │
│ ///vmfs/volumes/60ab7f19-5431c080-e1e2-bc4a56032a10/ ds:///vmfs/volumes/61f7c05c-c1b7f45c-6b19-bc4a560329ec/:Datastore: Datastore:datastore-3598, datastore URL: ds:///vmfs/volu │
│ mes/61f7c05c-c1b7f45c-6b19-bc4a560329ec/ ds:///vmfs/volumes/6202505e-22c1a774-873d-bc4a56032b0c/:Datastore: Datastore:datastore-3686, datastore URL: ds:///vmfs/volumes/6202505e │
│ -22c1a774-873d-bc4a56032b0c/ ds:///vmfs/volumes/6202507a-de5f2f6e-7a86-bc4a56032b0c/:Datastore: Datastore:datastore-3687, datastore URL: ds:///vmfs/volumes/6202507a-de5f2f6e-7a │
│ f5757f-bb90-6cfe5471726c/ ds:///vmfs/volumes/65843c61-ad94a27c-5d1a-b4969184b22c/:Datastore: Datastore:datastore-12351, datastore URL: ds:///vmfs/volumes/65843c61-ad94a27c-5d1a │
│ -b4969184b22c/ ds:///vmfs/volumes/65b9eaca-1f5a83f4-d9a0-b4969184af34/:Datastore: Datastore:datastore-12926, datastore URL: ds:///vmfs/volumes/65b9eaca-1f5a83f4-d9a0-b4969184af │
│ 34/] for vCenter \"HQ-ABC-VCSA01.ABC.LOCAL\"","TraceId":"f7b6beb9-a6bf-46d8-866a-089bd1ab6c17"}                                                                                  │
│ {"level":"info","time":"2024-02-21T02:19:20.487611032Z","caller":"vsphere/utils.go:409","msg":"New tag manager with useragent 'k8s-csi-useragent-9a47d76c-0f1f-4622-89ef-82ee05e │
│ 7d62b'","TraceId":"e4e5d712-5354-40dd-8ada-27a72923b5e3"}                                                                                                                        │
│ {"level":"warn","time":"2024-02-21T02:19:20.584830123Z","caller":"common/topology.go:426","msg":"failed to retrieve tags for category \"cns.vmware.topology-preferred-datastores │
│ \" in vCenter \"HQ-ABC-VCSA01.ABC.LOCAL\". Reason: GET https://HQ-ABC-VCSA01.ABC.LOCAL:443/rest/com/vmware/cis/tagging/category/id:cns.vmware.topology-preferred-datastores: 404 │
│  Not Found","TraceId":"e4e5d712-5354-40dd-8ada-27a72923b5e3"}                                                                                                                    │
│ {"level":"info","time":"2024-02-21T02:24:20.385312604Z","caller":"k8sorchestrator/topology.go:230","msg":"Refreshing preferred datastores information...","TraceId":"b42b5cfb-dc │
│ 62-4bde-85f8-5fb1f814ef6a"}                                                                                                                                                      │
│ {"level":"info","time":"2024-02-21T02:24:20.386857882Z","caller":"vsphere/utils.go:265","msg":"Defaulting timeout for vCenter Client to 5 minutes","TraceId":"b42b5cfb-dc62-4bde │
│ -85f8-5fb1f814ef6a"}                                                                                                                                                             │
│ {"level":"info","time":"2024-02-21T02:24:20.433918718Z","caller":"common/authmanager.go:163","msg":"auth manager: datastoreMapForBlockVolumes is updated to map[ds:///vmfs/volum │
│ es/5f59d3dd-56c2574c-9bd4-bc4a56032b0c/:Datastore: Datastore:datastore-34, datastore URL: ds:///vmfs/volumes/5f59d3dd-56c2574c-9bd4-bc4a56032b0c/ ds:///vmfs/volumes/5f59f202-22 │
│ 8e6fae-47d4-bc4a560326b0/:Datastore: Datastore:datastore-45, datastore URL: ds:///vmfs/volumes/5f59f202-228e6fae-47d4-bc4a560326b0/ ds:///vmfs/volumes/5f5a0731-410805c0-5140-bc │
│ 4a56032f20/:Datastore: Datastore:datastore-51, datastore URL: ds:///vmfs/volumes/5f5a0731-410805c0-5140-bc4a56032f20/ ds:///vmfs/volumes/5f5a09c2-01f34f9e-aa4e-bc4a56032a10/:Da │
│ tastore: Datastore:datastore-25, datastore URL: ds:///vmfs/volumes/5f5a09c2-01f34f9e-aa4e-bc4a56032a10/ ds:///vmfs/volumes/5f5a0e53-9c9d0208-8e57-bc4a560329a4/:Datastore: Datas │
│ tore:datastore-13046, datastore URL: ds:///vmfs/volumes/5f5a0e53-9c9d0208-8e57-bc4a560329a4/ ds:///vmfs/volumes/5f5ddbde-9bf31862-60dd-bc4a5602fdc4/:Datastore: Datastore:datast │
│ ore-48, datastore URL: ds:///vmfs/volumes/5f5ddbde-9bf31862-60dd-bc4a5602fdc4/ ds:///vmfs/volumes/5f5de184-e4dcdeec-7887-bc4a56032980/:Datastore: Datastore:datastore-28, datast │
│ ore URL: ds:///vmfs/volumes/5f5de184-e4dcdeec-7887-bc4a56032980/ ds:///vmfs/volumes/5f5de215-43b36590-8343-bc4a56030dcc/:Datastore: Datastore:datastore-54, datastore URL: ds:// │
│ /vmfs/volumes/5f5de215-43b36590-8343-bc4a56030dcc/ ds:///vmfs/volumes/5f5de4b3-4272cdd6-42fd-bc4a5603268c/:Datastore: Datastore:datastore-13050, datastore URL: ds:///vmfs/volum │
│ es/5f5de4b3-4272cdd6-42fd-bc4a5603268c/ ds:///vmfs/volumes/5f5e03fb-fc2e3634-c91b-bc4a560329ec/:Datastore: Datastore:datastore-41, datastore URL: ds:///vmfs/volumes/5f5e03fb-fc │
│ 2e3634-c91b-bc4a560329ec/ ds:///vmfs/volumes/5f60ad44-6eeef938-be69-bc4a56032b0c/:Datastore: Datastore:datastore-35, datastore URL: ds:///vmfs/volumes/5f60ad44-6eeef938-be69-bc │
│ 4a56032b0c/ ds:///vmfs/volumes/5f633e92-f4c6fb60-0f55-bc4a56032a10/:Datastore: Datastore:datastore-55, datastore URL: ds:///vmfs/volumes/5f633e92-f4c6fb60-0f55-bc4a56032a10/ ds │
│ :///vmfs/volumes/5f633f1a-c9ab0b12-7d93-bc4a56032980/:Datastore: Datastore:datastore-56, datastore URL: ds:///vmfs/volumes/5f633f1a-c9ab0b12-7d93-bc4a56032980/ ds:///vmfs/volum │
│ es/5f633f5b-413189c6-8e0e-bc4a560329a4/:Datastore: Datastore:datastore-57, datastore URL: ds:///vmfs/volumes/5f633f5b-413189c6-8e0e-bc4a560329a4/ ds:///vmfs/volumes/5f633fc3-0f │
│ ed51ac-7511-bc4a5603268c/:Datastore: Datastore:datastore-58, datastore URL: ds:///vmfs/volumes/5f633fc3-0fed51ac-7511-bc4a5603268c/ ds:///vmfs/volumes/5f63413f-a96aa1a6-06d0-bc │
│ 4a56032f20/:Datastore: Datastore:datastore-63, datastore URL: ds:///vmfs/volumes/5f63413f-a96aa1a6-06d0-bc4a56032f20/ ds:///vmfs/volumes/5f63417d-1566740a-c2e9-bc4a56032f20/:Da │
│ tastore: Datastore:datastore-64, datastore URL: ds:///vmfs/volumes/5f63417d-1566740a-c2e9-bc4a56032f20/ ds:///vmfs/volumes/5f6341c3-e544e6f0-0c38-bc4a56030dcc/:Datastore: Datas │
│ tore:datastore-65, datastore URL: ds:///vmfs/volumes/5f6341c3-e544e6f0-0c38-bc4a56030dcc/ ds:///vmfs/volumes/5f6341f8-2e689750-6356-bc4a56030dcc/:Datastore: Datastore:datastore │
│ -66, datastore URL: ds:///vmfs/volumes/5f6341f8-2e689750-6356-bc4a56030dcc/ ds:///vmfs/volumes/5f6342a3-ec0eac9e-0b81-bc4a56032b0c/:Datastore: Datastore:datastore-67, datastore │
│  URL: ds:///vmfs/volumes/5f6342a3-ec0eac9e-0b81-bc4a56032b0c/ ds:///vmfs/volumes/5f6342da-f624f172-2e22-bc4a56032b0c/:Datastore: Datastore:datastore-68, datastore URL: ds:///vm │
│ fs/volumes/5f6342da-f624f172-2e22-bc4a56032b0c/ ds:///vmfs/volumes/5f63430e-9ca4c746-ed01-bc4a56032b0c/:Datastore: Datastore:datastore-69, datastore URL: ds:///vmfs/volumes/5f6 │
│ 3430e-9ca4c746-ed01-bc4a56032b0c/ ds:///vmfs/volumes/5f634340-037e7512-8a02-bc4a560329ec/:Datastore: Datastore:datastore-70, datastore URL: ds:///vmfs/volumes/5f634340-037e7512 │
│ -8a02-bc4a560329ec/ ds:///vmfs/volumes/5f634377-e0ef3646-238d-bc4a560329ec/:Datastore: Datastore:datastore-71, datastore URL: ds:///vmfs/volumes/5f634377-e0ef3646-238d-bc4a5603 │
│ 29ec/ ds:///vmfs/volumes/5f6343a7-88f2cc92-5402-bc4a560329ec/:Datastore: Datastore:datastore-72, datastore URL: ds:///vmfs/volumes/5f6343a7-88f2cc92-5402-bc4a560329ec/ ds:///vm │
│ fs/volumes/5f7da5dd-8c3a0aae-9a6e-bc4a56032f20/:Datastore: Datastore:datastore-95, datastore URL: ds:///vmfs/volumes/5f7da5dd-8c3a0aae-9a6e-bc4a56032f20/ ds:///vmfs/volumes/5f7 │
│ da646-10b264ca-08a2-bc4a56032a10/:Datastore: Datastore:datastore-96, datastore URL: ds:///vmfs/volumes/5f7da646-10b264ca-08a2-bc4a56032a10/ ds:///vmfs/volumes/5fba22d0-28b4669e │
│ -ef5b-bc4a560329ec/:Datastore: Datastore:datastore-591, datastore URL: ds:///vmfs/volumes/5fba22d0-28b4669e-ef5b-bc4a560329ec/ ds:///vmfs/volumes/5fcf369d-b8e09b4e-2432-bc4a560 │
│ 32b0c/:Datastore: Datastore:datastore-612, datastore URL: ds:///vmfs/volumes/5fcf369d-b8e09b4e-2432-bc4a56032b0c/ ds:///vmfs/volumes/601fd1a1-c41bb032-ae91-bc4a56032b0c/:Datast │
│ ore: Datastore:datastore-800, datastore URL: ds:///vmfs/volumes/601fd1a1-c41bb032-ae91-bc4a56032b0c/ ds:///vmfs/volumes/60602b9f-4f9b2578-14b5-bc4a56032b0c/:Datastore: Datastor │
│ e:datastore-1286, datastore URL: ds:///vmfs/volumes/60602b9f-4f9b2578-14b5-bc4a56032b0c/ ds:///vmfs/volumes/60602c1a-9a358f6a-5c2d-bc4a560329ec/:Datastore: Datastore:datastore- │
│ 1287, datastore URL: ds:///vmfs/volumes/60602c1a-9a358f6a-5c2d-bc4a560329ec/ ds:///vmfs/volumes/60ab7ea5-f600a532-069a-bc4a56032a10/:Datastore: Datastore:datastore-1507, datast │
│ ore URL: ds:///vmfs/volumes/60ab7ea5-f600a532-069a-bc4a56032a10/ ds:///vmfs/volumes/60ab7f19-5431c080-e1e2-bc4a56032a10/:Datastore: Datastore:datastore-1508, datastore URL: ds: │
│ ///vmfs/volumes/60ab7f19-5431c080-e1e2-bc4a56032a10/ ds:///vmfs/volumes/61f7c05c-c1b7f45c-6b19-bc4a560329ec/:Datastore: Datastore:datastore-3598, datastore URL: ds:///vmfs/volu │
│  f5757f-bb90-6cfe5471726c/ ds:///vmfs/volumes/65843c61-ad94a27c-5d1a-b4969184b22c/:Datastore: Datastore:datastore-12351, datastore URL: ds:///vmfs/volumes/65843c61-ad94a27c-5d1a │
│ -b4969184b22c/ ds:///vmfs/volumes/65b9eaca-1f5a83f4-d9a0-b4969184af34/:Datastore: Datastore:datastore-12926, datastore URL: ds:///vmfs/volumes/65b9eaca-1f5a83f4-d9a0-b4969184af │
│ 34/] for vCenter \"HQ-ABC-VCSA01.ABC.LOCAL\"","TraceId":"31313e3e-cf7d-4f59-98fe-93bd30c7ec3e"}                                                                                  │
│ {"level":"info","time":"2024-02-21T02:24:20.445590752Z","caller":"common/authmanager.go:279","msg":"No vSAN datastores found for vCenter \"HQ-ABC-VCSA01.ABC.LOCAL\"","TraceId": │
│ "f5dd7332-702f-4e55-aa20-d297dd5a2a8d"}                                                                                                                                          │
│ {"level":"info","time":"2024-02-21T02:24:20.445639362Z","caller":"common/authmanager.go:183","msg":"auth manager: newFsEnabledClusterToDsMap is updated to map[] for vCenter \"H │
│ Q-ABC-VCSA01.ABC.LOCAL\"","TraceId":"f5dd7332-702f-4e55-aa20-d297dd5a2a8d"}                                                                                                      │
│ {"level":"info","time":"2024-02-21T02:24:20.497558232Z","caller":"vsphere/utils.go:409","msg":"New tag manager with useragent 'k8s-csi-useragent-9a47d76c-0f1f-4622-89ef-82ee05e │
│ 7d62b'","TraceId":"b42b5cfb-dc62-4bde-85f8-5fb1f814ef6a"}                                                                                                                        │
│ {"level":"warn","time":"2024-02-21T02:24:20.585473776Z","caller":"common/topology.go:426","msg":"failed to retrieve tags for category \"cns.vmware.topology-preferred-datastores │
│ \" in vCenter \"HQ-ABC-VCSA01.ABC.LOCAL\". Reason: GET https://HQ-ABC-VCSA01.ABC.LOCAL:443/rest/com/vmware/cis/tagging/category/id:cns.vmware.topology-preferred-datastores: 404 │
│  Not Found","TraceId":"b42b5cfb-dc62-4bde-85f8-5fb1f814ef6a"}
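
The only warning the controller keeps logging is the 404 for the "cns.vmware.topology-preferred-datastores" tag category. As far as I can tell this just means the category was never created in vCenter, which should be harmless unless preferred-datastore tagging is actually in use; with govc configured against this vCenter it can be confirmed like so:

$ govc tags.category.ls | grep cns.vmware.topology-preferred-datastores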

$ k logs vsphere-csi-node-4bp6l -c node-driver-registrar

I0221 02:04:18.151443       1 main.go:167] Version: v2.8.0
I0221 02:04:18.151658       1 main.go:168] Running node-driver-registrar in mode=registration
I0221 02:04:18.152643       1 main.go:192] Attempting to open a gRPC connection with: "/csi/csi.sock"
I0221 02:04:18.152739       1 connection.go:164] Connecting to unix:///csi/csi.sock
I0221 02:04:19.153935       1 main.go:199] Calling CSI driver to discover driver name
I0221 02:04:19.154064       1 connection.go:193] GRPC call: /csi.v1.Identity/GetPluginInfo
I0221 02:04:19.154079       1 connection.go:194] GRPC request: {}
I0221 02:04:19.160126       1 connection.go:200] GRPC response: {"name":"csi.vsphere.vmware.com","vendor_version":"v3.1.2"}
I0221 02:04:19.160147       1 connection.go:201] GRPC error: <nil>
I0221 02:04:19.160158       1 main.go:209] CSI driver name: "csi.vsphere.vmware.com"
I0221 02:04:19.160320       1 node_register.go:53] Starting Registration Server at: /registration/csi.vsphere.vmware.com-reg.sock
I0221 02:04:19.160859       1 node_register.go:62] Registration Server started at: /registration/csi.vsphere.vmware.com-reg.sock
I0221 02:04:19.161213       1 node_register.go:92] Skipping HTTP server because endpoint is set to: ""
I0221 02:04:19.230619       1 main.go:102] Received GetInfo call: &InfoRequest{}
I0221 02:04:19.231017       1 main.go:109] "Kubelet registration probe created" path="/var/lib/kubelet/plugins/csi.vsphere.vmware.com/registration"
I0221 02:05:02.263756       1 main.go:121] Received NotifyRegistrationStatus call: &RegistrationStatus{PluginRegistered:true,Error:,}

$ k logs vsphere-csi-node-4bp6l -c vsphere-csi-node

{"level":"info","time":"2024-02-21T02:04:18.870125832Z","caller":"logger/logger.go:41","msg":"Setting default log level to :\"PRODUCTION\""}
{"level":"info","time":"2024-02-21T02:04:18.871462547Z","caller":"vsphere-csi/main.go:56","msg":"Version : v3.1.2","TraceId":"2f80c44f-34cf-487e-a321-ef5e3504b79c"}
{"level":"info","time":"2024-02-21T02:04:18.871615169Z","caller":"vsphere-csi/main.go:73","msg":"Enable logging off for vCenter sessions on exit","TraceId":"2f80c44f-34cf-487e-a321-ef5e3504b79c"}
{"level":"info","time":"2024-02-21T02:04:18.875357793Z","caller":"logger/logger.go:41","msg":"Setting default log level to :\"PRODUCTION\""}
{"level":"info","time":"2024-02-21T02:04:18.875397419Z","caller":"k8sorchestrator/k8sorchestrator.go:251","msg":"Initializing k8sOrchestratorInstance","TraceId":"2f80c44f-34cf-487e-a321-ef5e3504b79c","TraceId":"aaa020c6-47b8-4d48-a4da-e220da2a9961"}
{"level":"info","time":"2024-02-21T02:04:18.875435173Z","caller":"kubernetes/kubernetes.go:85","msg":"k8s client using in-cluster config","TraceId":"2f80c44f-34cf-487e-a321-ef5e3504b79c","TraceId":"aaa020c6-47b8-4d48-a4da-e220da2a9961"}
{"level":"info","time":"2024-02-21T02:04:18.875590163Z","caller":"kubernetes/kubernetes.go:382","msg":"Setting client QPS to 100.000000 and Burst to 100.","TraceId":"2f80c44f-34cf-487e-a321-ef5e3504b79c","TraceId":"aaa020c6-47b8-4d48-a4da-e220da2a9961"}
{"level":"info","time":"2024-02-21T02:04:18.875837987Z","caller":"kubernetes/kubernetes.go:85","msg":"k8s client using in-cluster config","TraceId":"2f80c44f-34cf-487e-a321-ef5e3504b79c","TraceId":"aaa020c6-47b8-4d48-a4da-e220da2a9961"}
{"level":"info","time":"2024-02-21T02:04:18.875906838Z","caller":"kubernetes/kubernetes.go:382","msg":"Setting client QPS to 100.000000 and Burst to 100.","TraceId":"2f80c44f-34cf-487e-a321-ef5e3504b79c","TraceId":"aaa020c6-47b8-4d48-a4da-e220da2a9961"}
{"level":"info","time":"2024-02-21T02:04:18.876442048Z","caller":"kubernetes/informers.go:85","msg":"Created new informer factory for in-cluster client","TraceId":"2f80c44f-34cf-487e-a321-ef5e3504b79c","TraceId":"aaa020c6-47b8-4d48-a4da-e220da2a9961"}
{"level":"info","time":"2024-02-21T02:04:18.89652524Z","caller":"k8sorchestrator/k8sorchestrator.go:407","msg":"New internal feature states values stored successfully: map[async-query-volume:true block-volume-snapshot:true cnsmgr-suspend-create-volume:true csi-auth-check:true csi-internal-generated-cluster-id:true csi-migration:true csi-windows-support:true list-volumes:true listview-tasks:true max-pvscsi-targets-per-vm:true multi-vcenter-csi-topology:true online-volume-extend:true pv-to-backingdiskobjectid-mapping:false topology-preferential-datastores:true trigger-csi-fullsync:false]","TraceId":"2f80c44f-34cf-487e-a321-ef5e3504b79c","TraceId":"aaa020c6-47b8-4d48-a4da-e220da2a9961"}
{"level":"info","time":"2024-02-21T02:04:18.897245174Z","caller":"k8sorchestrator/k8sorchestrator.go:322","msg":"k8sOrchestratorInstance initialized","TraceId":"2f80c44f-34cf-487e-a321-ef5e3504b79c","TraceId":"aaa020c6-47b8-4d48-a4da-e220da2a9961"}
{"level":"info","time":"2024-02-21T02:04:18.899824637Z","caller":"k8sorchestrator/k8sorchestrator.go:647","msg":"configMapAdded: Internal feature state values from \"internal-feature-states.csi.vsphere.vmware.com\" stored successfully: map[async-query-volume:true block-volume-snapshot:true cnsmgr-suspend-create-volume:true csi-auth-check:true csi-internal-generated-cluster-id:true csi-migration:true csi-windows-support:true list-volumes:true listview-tasks:true max-pvscsi-targets-per-vm:true multi-vcenter-csi-topology:true online-volume-extend:true pv-to-backingdiskobjectid-mapping:false topology-preferential-datastores:true trigger-csi-fullsync:false]","TraceId":"0ce80253-1a6b-4584-833a-8582b7d6cac1"}
{"level":"info","time":"2024-02-21T02:04:18.904608305Z","caller":"service/driver.go:109","msg":"Configured: \"csi.vsphere.vmware.com\" with clusterFlavor: \"VANILLA\" and mode: \"node\"","TraceId":"2f80c44f-34cf-487e-a321-ef5e3504b79c","TraceId":"aaa020c6-47b8-4d48-a4da-e220da2a9961"}
{"level":"info","time":"2024-02-21T02:04:18.914383661Z","caller":"service/server.go:131","msg":"identity service registered"}
{"level":"info","time":"2024-02-21T02:04:18.91442537Z","caller":"service/server.go:146","msg":"node service registered"}
{"level":"info","time":"2024-02-21T02:04:18.914434854Z","caller":"service/server.go:152","msg":"Listening for connections on address: /csi/csi.sock"}
{"level":"info","time":"2024-02-21T02:04:19.232792467Z","caller":"service/node.go:338","msg":"NodeGetInfo: called with args {XXX_NoUnkeyedLiteral:{} XXX_unrecognized:[] XXX_sizecache:0}","TraceId":"1778ac26-3f84-4bcf-9203-9e40c4ef4e5b"}
{"level":"info","time":"2024-02-21T02:04:19.233070779Z","caller":"osutils/linux_os_utils.go:859","msg":"UUID is 42373709-336f-0c5e-7844-d2735763426c","TraceId":"1778ac26-3f84-4bcf-9203-9e40c4ef4e5b"}
{"level":"info","time":"2024-02-21T02:04:19.233104504Z","caller":"service/node.go:387","msg":"NodeGetInfo: MAX_VOLUMES_PER_NODE is set to 59","TraceId":"1778ac26-3f84-4bcf-9203-9e40c4ef4e5b"}
{"level":"info","time":"2024-02-21T02:04:19.233116602Z","caller":"kubernetes/kubernetes.go:85","msg":"k8s client using in-cluster config","TraceId":"1778ac26-3f84-4bcf-9203-9e40c4ef4e5b"}
{"level":"info","time":"2024-02-21T02:04:19.233287809Z","caller":"kubernetes/kubernetes.go:382","msg":"Setting client QPS to 100.000000 and Burst to 100.","TraceId":"1778ac26-3f84-4bcf-9203-9e40c4ef4e5b"}
{"level":"info","time":"2024-02-21T02:04:19.233303644Z","caller":"kubernetes/kubernetes.go:85","msg":"k8s client using in-cluster config","TraceId":"1778ac26-3f84-4bcf-9203-9e40c4ef4e5b"}
{"level":"info","time":"2024-02-21T02:04:19.233397509Z","caller":"kubernetes/kubernetes.go:382","msg":"Setting client QPS to 100.000000 and Burst to 100.","TraceId":"1778ac26-3f84-4bcf-9203-9e40c4ef4e5b"}
{"level":"info","time":"2024-02-21T02:04:19.244131026Z","caller":"k8sorchestrator/topology.go:713","msg":"Topology service initiated successfully","TraceId":"1778ac26-3f84-4bcf-9203-9e40c4ef4e5b"}
{"level":"info","time":"2024-02-21T02:04:19.253108028Z","caller":"k8sorchestrator/topology.go:763","msg":"Successfully patched CSINodeTopology instance: \"cube-worker-storage-01-test-dev.ABC.dc\" with Uuid: \"42373709-336f-0c5e-7844-d2735763426c\"","TraceId":"1778ac26-3f84-4bcf-9203-9e40c4ef4e5b"}
{"level":"info","time":"2024-02-21T02:04:19.253201187Z","caller":"k8sorchestrator/topology.go:979","msg":"Timeout is set to 1 minute(s)","TraceId":"1778ac26-3f84-4bcf-9203-9e40c4ef4e5b"}
{"level":"info","time":"2024-02-21T02:05:02.243902897Z","caller":"service/node.go:446","msg":"NodeGetInfo response: node_id:\"42373709-336f-0c5e-7844-d2735763426c\" max_volumes_per_node:59 accessible_topology:<> ","TraceId":"1778ac26-3f84-4bcf-9203-9e40c4ef4e5b"}

$ k logs --previous vsphere-csi-node-x4j5m -c node-driver-registrar

I0221 02:04:18.078240       1 main.go:167] Version: v2.8.0
I0221 02:04:18.078342       1 main.go:168] Running node-driver-registrar in mode=registration
I0221 02:04:18.079404       1 main.go:192] Attempting to open a gRPC connection with: "/csi/csi.sock"
I0221 02:04:18.079434       1 connection.go:164] Connecting to unix:///csi/csi.sock
I0221 02:04:19.083501       1 main.go:199] Calling CSI driver to discover driver name
I0221 02:04:19.083529       1 connection.go:193] GRPC call: /csi.v1.Identity/GetPluginInfo
I0221 02:04:19.083546       1 connection.go:194] GRPC request: {}
I0221 02:04:19.092767       1 connection.go:200] GRPC response: {"name":"csi.vsphere.vmware.com","vendor_version":"v3.1.2"}
I0221 02:04:19.092782       1 connection.go:201] GRPC error: <nil>
I0221 02:04:19.092792       1 main.go:209] CSI driver name: "csi.vsphere.vmware.com"
I0221 02:04:19.092818       1 node_register.go:53] Starting Registration Server at: /registration/csi.vsphere.vmware.com-reg.sock
I0221 02:04:19.093124       1 node_register.go:62] Registration Server started at: /registration/csi.vsphere.vmware.com-reg.sock
I0221 02:04:19.093304       1 node_register.go:92] Skipping HTTP server because endpoint is set to: ""
I0221 02:04:20.065611       1 main.go:102] Received GetInfo call: &InfoRequest{}
I0221 02:04:20.065979       1 main.go:109] "Kubelet registration probe created" path="/var/lib/kubelet/plugins/csi.vsphere.vmware.com/registration"
I0221 02:04:47.876326       1 main.go:121] Received NotifyRegistrationStatus call: &RegistrationStatus{PluginRegistered:false,Error:RegisterPlugin error -- plugin registration failed with err: rpc error: code = Unavailable desc = error reading from server: EOF,}
E0221 02:04:47.876399       1 main.go:123] Registration process failed with error: RegisterPlugin error -- plugin registration failed with err: rpc error: code = Unavailable desc = error reading from server: EOF, restarting registration container.
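
So on the crashing nodes the registrar loses its gRPC connection to /csi/csi.sock (EOF) mid-registration, which lines up with the driver container being terminated at that moment (see the SIGTERM in the next log). The termination reason and any probe failures should be visible in the pod events:

$ kubectl -n vmware-system-csi describe pod vsphere-csi-node-x4j5m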

$ k logs --previous vsphere-csi-node-x4j5m -c vsphere-csi-node

{"level":"info","time":"2024-02-21T02:24:38.594167579Z","caller":"logger/logger.go:41","msg":"Setting default log level to :\"PRODUCTION\""}
{"level":"info","time":"2024-02-21T02:24:38.594605629Z","caller":"vsphere-csi/main.go:56","msg":"Version : v3.1.2","TraceId":"dec2f036-0c44-4b2a-ad6a-4463a6343c5e"}
{"level":"info","time":"2024-02-21T02:24:38.594729446Z","caller":"vsphere-csi/main.go:73","msg":"Enable logging off for vCenter sessions on exit","TraceId":"dec2f036-0c44-4b2a-ad6a-4463a6343c5e"}
{"level":"info","time":"2024-02-21T02:24:38.595165743Z","caller":"logger/logger.go:41","msg":"Setting default log level to :\"PRODUCTION\""}
{"level":"info","time":"2024-02-21T02:24:38.595441566Z","caller":"k8sorchestrator/k8sorchestrator.go:251","msg":"Initializing k8sOrchestratorInstance","TraceId":"dec2f036-0c44-4b2a-ad6a-4463a6343c5e","TraceId":"92e5ea38-cbcf-449d-94e4-dbd4b285aae5"}
{"level":"info","time":"2024-02-21T02:24:38.595538142Z","caller":"kubernetes/kubernetes.go:85","msg":"k8s client using in-cluster config","TraceId":"dec2f036-0c44-4b2a-ad6a-4463a6343c5e","TraceId":"92e5ea38-cbcf-449d-94e4-dbd4b285aae5"}
{"level":"info","time":"2024-02-21T02:24:38.596149272Z","caller":"kubernetes/kubernetes.go:382","msg":"Setting client QPS to 100.000000 and Burst to 100.","TraceId":"dec2f036-0c44-4b2a-ad6a-4463a6343c5e","TraceId":"92e5ea38-cbcf-449d-94e4-dbd4b285aae5"}
{"level":"info","time":"2024-02-21T02:24:38.596975244Z","caller":"kubernetes/kubernetes.go:85","msg":"k8s client using in-cluster config","TraceId":"dec2f036-0c44-4b2a-ad6a-4463a6343c5e","TraceId":"92e5ea38-cbcf-449d-94e4-dbd4b285aae5"}
{"level":"info","time":"2024-02-21T02:24:38.597420818Z","caller":"kubernetes/kubernetes.go:382","msg":"Setting client QPS to 100.000000 and Burst to 100.","TraceId":"dec2f036-0c44-4b2a-ad6a-4463a6343c5e","TraceId":"92e5ea38-cbcf-449d-94e4-dbd4b285aae5"}
{"level":"info","time":"2024-02-21T02:24:38.597767769Z","caller":"kubernetes/informers.go:85","msg":"Created new informer factory for in-cluster client","TraceId":"dec2f036-0c44-4b2a-ad6a-4463a6343c5e","TraceId":"92e5ea38-cbcf-449d-94e4-dbd4b285aae5"}
{"level":"info","time":"2024-02-21T02:24:38.606979186Z","caller":"k8sorchestrator/k8sorchestrator.go:407","msg":"New internal feature states values stored successfully: map[async-query-volume:true block-volume-snapshot:true cnsmgr-suspend-create-volume:true csi-auth-check:true csi-internal-generated-cluster-id:true csi-migration:true csi-windows-support:true list-volumes:true listview-tasks:true max-pvscsi-targets-per-vm:true multi-vcenter-csi-topology:true online-volume-extend:true pv-to-backingdiskobjectid-mapping:false topology-preferential-datastores:true trigger-csi-fullsync:false]","TraceId":"dec2f036-0c44-4b2a-ad6a-4463a6343c5e","TraceId":"92e5ea38-cbcf-449d-94e4-dbd4b285aae5"}
{"level":"info","time":"2024-02-21T02:24:38.607141745Z","caller":"k8sorchestrator/k8sorchestrator.go:322","msg":"k8sOrchestratorInstance initialized","TraceId":"dec2f036-0c44-4b2a-ad6a-4463a6343c5e","TraceId":"92e5ea38-cbcf-449d-94e4-dbd4b285aae5"}
{"level":"info","time":"2024-02-21T02:24:38.610823052Z","caller":"k8sorchestrator/k8sorchestrator.go:647","msg":"configMapAdded: Internal feature state values from \"internal-feature-states.csi.vsphere.vmware.com\" stored successfully: map[async-query-volume:true block-volume-snapshot:true cnsmgr-suspend-create-volume:true csi-auth-check:true csi-internal-generated-cluster-id:true csi-migration:true csi-windows-support:true list-volumes:true listview-tasks:true max-pvscsi-targets-per-vm:true multi-vcenter-csi-topology:true online-volume-extend:true pv-to-backingdiskobjectid-mapping:false topology-preferential-datastores:true trigger-csi-fullsync:false]","TraceId":"c0304904-850f-48b6-8333-43e622ffd93e"}
{"level":"info","time":"2024-02-21T02:24:38.611292887Z","caller":"service/driver.go:109","msg":"Configured: \"csi.vsphere.vmware.com\" with clusterFlavor: \"VANILLA\" and mode: \"node\"","TraceId":"dec2f036-0c44-4b2a-ad6a-4463a6343c5e","TraceId":"92e5ea38-cbcf-449d-94e4-dbd4b285aae5"}
{"level":"info","time":"2024-02-21T02:24:38.611645216Z","caller":"service/server.go:131","msg":"identity service registered"}
{"level":"info","time":"2024-02-21T02:24:38.611682199Z","caller":"service/server.go:146","msg":"node service registered"}
{"level":"info","time":"2024-02-21T02:24:38.611699282Z","caller":"service/server.go:152","msg":"Listening for connections on address: /csi/csi.sock"}
{"level":"info","time":"2024-02-21T02:25:07.861713443Z","caller":"vsphere-csi/main.go:88","msg":"SIGTERM signal received","TraceId":"dec2f036-0c44-4b2a-ad6a-4463a6343c5e"}
{"level":"info","time":"2024-02-21T02:25:07.861769196Z","caller":"utils/utils.go:230","msg":"Logging out all vCenter sessions","TraceId":"dec2f036-0c44-4b2a-ad6a-4463a6343c5e"}
{"level":"info","time":"2024-02-21T02:25:07.861793774Z","caller":"vsphere/virtualcentermanager.go:74","msg":"Initializing defaultVirtualCenterManager...","TraceId":"dec2f036-0c44-4b2a-ad6a-4463a6343c5e"}
{"level":"info","time":"2024-02-21T02:25:07.861814376Z","caller":"vsphere/virtualcentermanager.go:76","msg":"Successfully initialized defaultVirtualCenterManager","TraceId":"dec2f036-0c44-4b2a-ad6a-4463a6343c5e"}
{"level":"info","time":"2024-02-21T02:25:07.861820962Z","caller":"utils/utils.go:256","msg":"Successfully logged out vCenter sessions","TraceId":"dec2f036-0c44-4b2a-ad6a-4463a6343c5e"}

Any advice or suggestions regarding this issue would be greatly appreciated.

Thank you in advance for your help.

@k8s-triage-robot commented May 21, 2024

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle stale
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot added the lifecycle/stale label May 21, 2024