
TypeError: response.body.children is not iterable #376

Open
djjudas21 opened this issue Mar 21, 2024 · 4 comments

@djjudas21

I've started trying to use snapshots in anger for the first time and I'm running into this error:

E0321 21:17:30.634398       1 snapshot_controller.go:124] checkandUpdateContentStatus [snapcontent-f1302406-9689-43ba-9273-fa4962c22221]: error occurred failed to take snapshot of the volume pvc-0ccd4181-398b-4d3c-ad97-bc876159b7c4: "rpc error: code = Internal desc = TypeError: response.body.children is not iterable TypeError: response.body.children is not iterable\n    at FreeNASApiDriver.CreateSnapshot (/home/csi/app/src/driver/freenas/api.js:3993:43)\n    at runMicrotasks (<anonymous>)\n    at processTicksAndRejections (node:internal/process/task_queues:96:5)\n    at async requestHandlerProxy (/home/csi/app/bin/democratic-csi:217:18)"
E0321 21:17:30.634425       1 snapshot_controller_base.go:283] could not sync content "snapcontent-f1302406-9689-43ba-9273-fa4962c22221": failed to take snapshot of the volume pvc-0ccd4181-398b-4d3c-ad97-bc876159b7c4: "rpc error: code = Internal desc = TypeError: response.body.children is not iterable TypeError: response.body.children is not iterable\n    at FreeNASApiDriver.CreateSnapshot (/home/csi/app/src/driver/freenas/api.js:3993:43)\n    at runMicrotasks (<anonymous>)\n    at processTicksAndRejections (node:internal/process/task_queues:96:5)\n    at async requestHandlerProxy (/home/csi/app/bin/democratic-csi:217:18)"
I0321 21:17:30.634477       1 snapshot_controller_base.go:185] Failed to sync content "snapcontent-f1302406-9689-43ba-9273-fa4962c22221", will retry again: failed to take snapshot of the volume pvc-0ccd4181-398b-4d3c-ad97-bc876159b7c4: "rpc error: code = Internal desc = TypeError: response.body.children is not iterable TypeError: response.body.children is not iterable\n    at FreeNASApiDriver.CreateSnapshot (/home/csi/app/src/driver/freenas/api.js:3993:43)\n    at runMicrotasks (<anonymous>)\n    at processTicksAndRejections (node:internal/process/task_queues:96:5)\n    at async requestHandlerProxy (/home/csi/app/bin/democratic-csi:217:18)"
I0321 21:17:30.634561       1 event.go:285] Event(v1.ObjectReference{Kind:"VolumeSnapshotContent", Namespace:"", Name:"snapcontent-f1302406-9689-43ba-9273-fa4962c22221", UID:"9e559b9c-1249-401d-8cde-b80b46787470", APIVersion:"snapshot.storage.k8s.io/v1", ResourceVersion:"159777225", FieldPath:""}): type: 'Warning' reason: 'SnapshotContentCheckandUpdateFailed' Failed to check and update snapshot content: failed to take snapshot of the volume pvc-0ccd4181-398b-4d3c-ad97-bc876159b7c4: "rpc error: code = Internal desc = TypeError: response.body.children is not iterable TypeError: response.body.children is not iterable\n    at FreeNASApiDriver.CreateSnapshot (/home/csi/app/src/driver/freenas/api.js:3993:43)\n    at runMicrotasks (<anonymous>)\n    at processTicksAndRejections (node:internal/process/task_queues:96:5)\n    at async requestHandlerProxy (/home/csi/app/bin/democratic-csi:217:18)"

I'm wondering if this is because I recently upgraded to TrueNAS SCALE 23.10.2; I'm sure snapshots worked the last time I tried them.
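For what it's worth, a `for...of` loop (or spread/destructuring) throws exactly this TypeError when the TrueNAS API returns a response body whose `children` property is missing or not an array (e.g. an error payload). A minimal sketch of the failure mode and a defensive guard; the function names here are illustrative, not the driver's actual api.js code:

```javascript
// Naive pattern: assumes the API body always contains a `children` array.
function listChildren(response) {
  const names = [];
  for (const child of response.body.children) { // TypeError if `children` is missing/not iterable
    names.push(child.name);
  }
  return names;
}

// Defensive variant: tolerates an unexpected body shape by falling back to [].
function listChildrenSafe(response) {
  const children = Array.isArray(response?.body?.children)
    ? response.body.children
    : [];
  return children.map((child) => child.name);
}

console.log(listChildrenSafe({ body: { children: [{ name: "snap1" }] } })); // → [ 'snap1' ]
console.log(listChildrenSafe({ body: {} })); // → []
```

Calling `listChildren({ body: {} })` reproduces the exact message from the stack trace ("response.body.children is not iterable"), which suggests the API occasionally responds without the expected dataset listing.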

I'm deploying chart democratic-csi-0.14.51.0 with the following values (plus a redacted API key):

driver:
  config:
    driver: freenas-api-nfs
    instance_id: truenas
    httpConnection:
      protocol: http
      host: 192.168.0.2
      port: 80
      allowInsecure: true
    zfs:
      datasetProperties:
        "org.freenas:description": "{{ parameters.[csi.storage.k8s.io/pvc/namespace] }}/{{ parameters.[csi.storage.k8s.io/pvc/name] }}"
      datasetParentName: ssd/k8s/vols
      detachedSnapshotsDatasetParentName: ssd/k8s/snaps
      datasetEnableQuotas: true
      datasetEnableReservation: false
      datasetPermissionsMode: "0777"
      datasetPermissionsUser: 0
      datasetPermissionsGroup: 0
    nfs:
      shareCommentTemplate: "{{ parameters.[csi.storage.k8s.io/pvc/namespace] }}-{{ parameters.[csi.storage.k8s.io/pvc/name] }}"
      shareHost: 192.168.0.2
      shareAlldirs: false
      shareAllowedHosts: []
      shareAllowedNetworks: []
      shareMaprootUser: ''
      shareMaprootGroup: ''
      shareMapallUser: root
      shareMapallGroup: root


# https://kubernetes-csi.github.io/docs/csi-driver-object.html
csiDriver:
  # should be globally unique for a given cluster
  name: "org.democratic-csi.nfs"
  storageCapacity: true

controller:
  enabled: true
  rbac:
    enabled: true
  replicaCount: 1
  hostNetwork: false
  priorityClassName: ""

  # https://github.com/kubernetes-csi/external-health-monitor
  externalHealthMonitorController:
    enabled: false
    resources:

  livenessProbe:
    enabled: true

  # democratic-csi controller
  driver:
    enabled: true
    image: docker.io/democraticcsi/democratic-csi:latest
    # imagePullPolicy: Always
    logLevel: info

    resources:

node:
  rbac:
    enabled: true
  enabled: true
  hostNetwork: true
  kubeletHostPath: /var/snap/microk8s/common/var/lib/kubelet

  priorityClassName: ""

  livenessProbe:
    enabled: true

  # democratic-csi node
  driver:
    enabled: true
    image: docker.io/democraticcsi/democratic-csi:latest
    # imagePullPolicy: Always
    logLevel: info

    resources:
      requests:
        memory: 100Mi
        cpu: 25m

  # https://kubernetes-csi.github.io/docs/node-driver-registrar.html
  driverRegistrar:
    resources:

# add note here about volume expansion requirements
storageClasses:
- name: truenas
  defaultClass: false
  reclaimPolicy: Delete
  volumeBindingMode: Immediate
  allowVolumeExpansion: true
  parameters:
    fsType: nfs
    mountOptions:
    - noatime
    - nfsvers=4

volumeSnapshotClasses:
  - name: truenas
    annotations:
      k10.kasten.io/is-snapshot-class: "true"
    defaultClass: false
    deletionPolicy: Delete
@travisghansen
Member

#301

Can you enable debug logging on the controller so I can see the raw HTTP response? This problem has been elusive, but if you can send over the response that leads to this situation, it should be easy to fix.

@djjudas21
Author

OK. Assuming controller debug mode is enabled by setting:

controller:
  driver:
    enabled: true
    logLevel: debug

This is what the external-snapshotter container spits out. Annoyingly, there's no error, and it seems to have created the snapshots properly. I guess restarting the controller to enable debugging has "fixed" the problem temporarily.

I0322 13:53:25.751035       1 snapshot_controller.go:103] createSnapshot for content [snapcontent-ffd0a256-4e73-461a-a36a-69ae09511c95]: started
I0322 13:53:25.751047       1 snapshot_controller.go:291] createSnapshotWrapper: Creating snapshot for content snapcontent-ffd0a256-4e73-461a-a36a-69ae09511c95 through the plugin ...
I0322 13:53:25.751059       1 snapshot_controller.go:205] getCSISnapshotInput for content [snapcontent-ffd0a256-4e73-461a-a36a-69ae09511c95]
I0322 13:53:25.751069       1 snapshot_controller.go:472] getSnapshotClass: VolumeSnapshotClassName [truenas]
I0322 13:53:25.751082       1 snapshot_controller.go:603] setAnnVolumeSnapshotBeingCreated: set annotation [snapshot.storage.kubernetes.io/volumesnapshot-being-created:yes] on content [snapcontent-ffd0a256-4e73-461a-a36a-69ae09511c95].
I0322 13:53:25.751232       1 snapshot_controller_base.go:162] enqueued "snapcontent-ffd0a256-4e73-461a-a36a-69ae09511c95" for sync
I0322 13:53:25.765396       1 util.go:216] storeObjectUpdate updating content "snapcontent-ffd0a256-4e73-461a-a36a-69ae09511c95" with version 160021205
I0322 13:53:25.765435       1 snapshot_controller.go:628] setAnnVolumeSnapshotBeingCreated: volume snapshot content &{TypeMeta:{Kind: APIVersion:} ObjectMeta:{Name:snapcontent-ffd0a256-4e73-461a-a36a-69ae09511c95 GenerateName: Namespace: SelfLink: UID:41e7b5cd-6dbb-4331-8302-4d0ebdb8d6ec ResourceVersion:160021205 Generation:1 CreationTimestamp:2024-03-22 13:53:25 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[] Annotations:map[snapshot.storage.kubernetes.io/volumesnapshot-being-created:yes] OwnerReferences:[] Finalizers:[snapshot.storage.kubernetes.io/volumesnapshotcontent-bound-protection] ManagedFields:[{Manager:csi-snapshotter Operation:Update APIVersion:snapshot.storage.k8s.io/v1 Time:2024-03-22 13:53:25 +0000 UTC FieldsType:FieldsV1 FieldsV1:{"f:metadata":{"f:annotations":{".":{},"f:snapshot.storage.kubernetes.io/volumesnapshot-being-created":{}}}} Subresource:} {Manager:snapshot-controller Operation:Update APIVersion:snapshot.storage.k8s.io/v1 Time:2024-03-22 13:53:25 +0000 UTC FieldsType:FieldsV1 FieldsV1:{"f:metadata":{"f:finalizers":{".":{},"v:\"snapshot.storage.kubernetes.io/volumesnapshotcontent-bound-protection\"":{}}},"f:spec":{".":{},"f:deletionPolicy":{},"f:driver":{},"f:source":{".":{},"f:volumeHandle":{}},"f:volumeSnapshotClassName":{},"f:volumeSnapshotRef":{".":{},"f:apiVersion":{},"f:kind":{},"f:name":{},"f:namespace":{},"f:resourceVersion":{},"f:uid":{}}}} Subresource:}]} Spec:{VolumeSnapshotRef:{Kind:VolumeSnapshot Namespace:immich Name:k10-csi-snap-ldz7h8lsqmmj7m7g UID:ffd0a256-4e73-461a-a36a-69ae09511c95 APIVersion:snapshot.storage.k8s.io/v1 ResourceVersion:160021198 FieldPath:} DeletionPolicy:Delete Driver:org.democratic-csi.nfs VolumeSnapshotClassName:0xc0009109d0 Source:{VolumeHandle:0xc0009109c0 SnapshotHandle:<nil>} SourceVolumeMode:<nil>} Status:<nil>}
I0322 13:53:25.765703       1 snapshotter.go:56] CSI CreateSnapshot: snapshot-ffd0a256-4e73-461a-a36a-69ae09511c95
I0322 13:53:25.765722       1 connection.go:183] GRPC call: /csi.v1.Identity/GetPluginInfo
I0322 13:53:25.765734       1 connection.go:184] GRPC request: {}
I0322 13:53:25.767472       1 snapshot_controller_base.go:162] enqueued "snapcontent-ffd0a256-4e73-461a-a36a-69ae09511c95" for sync
I0322 13:53:25.770597       1 connection.go:186] GRPC response: {"name":"org.democratic-csi.nfs","vendor_version":"1.8.4"}
I0322 13:53:25.770682       1 connection.go:187] GRPC error: <nil>
I0322 13:53:25.770700       1 connection.go:183] GRPC call: /csi.v1.Controller/CreateSnapshot
I0322 13:53:25.770712       1 connection.go:184] GRPC request: {"name":"snapshot-ffd0a256-4e73-461a-a36a-69ae09511c95","source_volume_id":"pvc-57482218-4d41-4580-b559-4d065e3a31d0"}
I0322 13:53:26.801367       1 connection.go:186] GRPC response: {"snapshot":{"creation_time":{"seconds":1711115606},"ready_to_use":true,"size_bytes":4779984896,"snapshot_id":"pvc-57482218-4d41-4580-b559-4d065e3a31d0@snapshot-ffd0a256-4e73-461a-a36a-69ae09511c95","source_volume_id":"pvc-57482218-4d41-4580-b559-4d065e3a31d0"}}
I0322 13:53:26.801580       1 connection.go:187] GRPC error: <nil>
I0322 13:53:26.801593       1 snapshotter.go:76] CSI CreateSnapshot: snapshot-ffd0a256-4e73-461a-a36a-69ae09511c95 driver name [org.democratic-csi.nfs] snapshot ID [pvc-57482218-4d41-4580-b559-4d065e3a31d0@snapshot-ffd0a256-4e73-461a-a36a-69ae09511c95] time stamp [seconds:1711115606] size [4779984896] readyToUse [true]
I0322 13:53:26.801629       1 snapshot_controller.go:336] Created snapshot: driver org.democratic-csi.nfs, snapshotId pvc-57482218-4d41-4580-b559-4d065e3a31d0@snapshot-ffd0a256-4e73-461a-a36a-69ae09511c95, creationTime 2024-03-22 13:53:26 +0000 UTC, size 4779984896, readyToUse true
I0322 13:53:26.801653       1 snapshot_controller.go:417] updateSnapshotContentStatus: updating VolumeSnapshotContent [snapcontent-ffd0a256-4e73-461a-a36a-69ae09511c95], snapshotHandle pvc-57482218-4d41-4580-b559-4d065e3a31d0@snapshot-ffd0a256-4e73-461a-a36a-69ae09511c95, readyToUse true, createdAt 1711115606000000000, size 4779984896
I0322 13:53:26.816737       1 snapshot_controller_base.go:162] enqueued "snapcontent-ffd0a256-4e73-461a-a36a-69ae09511c95" for sync
I0322 13:53:26.827157       1 snapshot_controller.go:648] Removed VolumeSnapshotBeingCreated annotation from volume snapshot content snapcontent-ffd0a256-4e73-461a-a36a-69ae09511c95
I0322 13:53:26.827176       1 util.go:216] storeObjectUpdate updating content "snapcontent-ffd0a256-4e73-461a-a36a-69ae09511c95" with version 160021213
I0322 13:53:26.827184       1 util.go:216] storeObjectUpdate updating content "snapcontent-ffd0a256-4e73-461a-a36a-69ae09511c95" with version 160021213
I0322 13:53:26.827195       1 snapshot_controller_base.go:196] syncContentByKey[snapcontent-ffd0a256-4e73-461a-a36a-69ae09511c95]
I0322 13:53:26.827207       1 util.go:212] storeObjectUpdate: ignoring content "snapcontent-ffd0a256-4e73-461a-a36a-69ae09511c95" version 160021212
I0322 13:53:26.828002       1 snapshot_controller_base.go:162] enqueued "snapcontent-ffd0a256-4e73-461a-a36a-69ae09511c95" for sync
I0322 13:53:26.828028       1 snapshot_controller_base.go:196] syncContentByKey[snapcontent-ffd0a256-4e73-461a-a36a-69ae09511c95]
I0322 13:53:26.828038       1 util.go:216] storeObjectUpdate updating content "snapcontent-ffd0a256-4e73-461a-a36a-69ae09511c95" with version 160021213
I0322 13:53:26.828044       1 snapshot_controller.go:56] synchronizing VolumeSnapshotContent[snapcontent-ffd0a256-4e73-461a-a36a-69ae09511c95]
I0322 13:53:26.828070       1 snapshot_controller.go:564] Check if VolumeSnapshotContent[snapcontent-ffd0a256-4e73-461a-a36a-69ae09511c95] should be deleted.
I0322 13:53:26.998006       1 leaderelection.go:278] successfully renewed lease democratic-csi/external-snapshotter-leader-org-democratic-csi-nfs
I0322 13:53:27.155348       1 snapshot_controller_base.go:162] enqueued "snapcontent-6dc56c8b-8eb6-4ed7-8508-0abf111320a5" for sync
I0322 13:53:27.155401       1 snapshot_controller_base.go:196] syncContentByKey[snapcontent-6dc56c8b-8eb6-4ed7-8508-0abf111320a5]
I0322 13:53:27.500390       1 snapshot_controller_base.go:162] enqueued "snapcontent-77c7a6df-94aa-405d-8924-feb2854540df" for sync
I0322 13:53:27.500554       1 snapshot_controller_base.go:196] syncContentByKey[snapcontent-77c7a6df-94aa-405d-8924-feb2854540df]
I0322 13:53:27.500923       1 snapshot_controller_base.go:162] enqueued "snapcontent-6dc56c8b-8eb6-4ed7-8508-0abf111320a5" for sync
I0322 13:53:27.500938       1 snapshot_controller_base.go:196] syncContentByKey[snapcontent-6dc56c8b-8eb6-4ed7-8508-0abf111320a5]
I0322 13:53:27.501031       1 snapshot_controller_base.go:162] enqueued "snapcontent-77c7a6df-94aa-405d-8924-feb2854540df" for sync
I0322 13:53:27.501043       1 snapshot_controller_base.go:196] syncContentByKey[snapcontent-77c7a6df-94aa-405d-8924-feb2854540df]
I0322 13:53:28.019029       1 snapshot_controller_base.go:162] enqueued "snapcontent-6dc56c8b-8eb6-4ed7-8508-0abf111320a5" for sync
I0322 13:53:28.019080       1 snapshot_controller_base.go:196] syncContentByKey[snapcontent-6dc56c8b-8eb6-4ed7-8508-0abf111320a5]
I0322 13:53:28.336583       1 snapshot_controller_base.go:162] enqueued "snapcontent-1de1d73d-813b-41c1-a187-f2f34150e54b" for sync
I0322 13:53:28.336637       1 snapshot_controller_base.go:196] syncContentByKey[snapcontent-1de1d73d-813b-41c1-a187-f2f34150e54b]
I0322 13:53:28.336661       1 util.go:188] storeObjectUpdate: adding content "snapcontent-1de1d73d-813b-41c1-a187-f2f34150e54b", version 160021230
I0322 13:53:28.336679       1 snapshot_controller.go:56] synchronizing VolumeSnapshotContent[snapcontent-1de1d73d-813b-41c1-a187-f2f34150e54b]
I0322 13:53:28.336722       1 snapshot_controller.go:564] Check if VolumeSnapshotContent[snapcontent-1de1d73d-813b-41c1-a187-f2f34150e54b] should be deleted.
I0322 13:53:28.336735       1 snapshot_controller.go:75] syncContent: Call CreateSnapshot for content snapcontent-1de1d73d-813b-41c1-a187-f2f34150e54b
I0322 13:53:28.336747       1 snapshot_controller.go:103] createSnapshot for content [snapcontent-1de1d73d-813b-41c1-a187-f2f34150e54b]: started
I0322 13:53:28.336760       1 snapshot_controller.go:291] createSnapshotWrapper: Creating snapshot for content snapcontent-1de1d73d-813b-41c1-a187-f2f34150e54b through the plugin ...
I0322 13:53:28.336772       1 snapshot_controller.go:205] getCSISnapshotInput for content [snapcontent-1de1d73d-813b-41c1-a187-f2f34150e54b]
I0322 13:53:28.336784       1 snapshot_controller.go:472] getSnapshotClass: VolumeSnapshotClassName [truenas]
I0322 13:53:28.336797       1 snapshot_controller.go:603] setAnnVolumeSnapshotBeingCreated: set annotation [snapshot.storage.kubernetes.io/volumesnapshot-being-created:yes] on content [snapcontent-1de1d73d-813b-41c1-a187-f2f34150e54b].
I0322 13:53:28.356845       1 util.go:216] storeObjectUpdate updating content "snapcontent-1de1d73d-813b-41c1-a187-f2f34150e54b" with version 160021232
I0322 13:53:28.356895       1 snapshot_controller.go:628] setAnnVolumeSnapshotBeingCreated: volume snapshot content &{TypeMeta:{Kind: APIVersion:} ObjectMeta:{Name:snapcontent-1de1d73d-813b-41c1-a187-f2f34150e54b GenerateName: Namespace: SelfLink: UID:98c0462c-aa2e-4f9a-b48b-b6975ea93af5 ResourceVersion:160021232 Generation:1 CreationTimestamp:2024-03-22 13:53:28 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[] Annotations:map[snapshot.storage.kubernetes.io/volumesnapshot-being-created:yes] OwnerReferences:[] Finalizers:[] ManagedFields:[{Manager:csi-snapshotter Operation:Update APIVersion:snapshot.storage.k8s.io/v1 Time:2024-03-22 13:53:28 +0000 UTC FieldsType:FieldsV1 FieldsV1:{"f:metadata":{"f:annotations":{".":{},"f:snapshot.storage.kubernetes.io/volumesnapshot-being-created":{}}}} Subresource:} {Manager:snapshot-controller Operation:Update APIVersion:snapshot.storage.k8s.io/v1 Time:2024-03-22 13:53:28 +0000 UTC FieldsType:FieldsV1 FieldsV1:{"f:spec":{".":{},"f:deletionPolicy":{},"f:driver":{},"f:source":{".":{},"f:volumeHandle":{}},"f:volumeSnapshotClassName":{},"f:volumeSnapshotRef":{".":{},"f:apiVersion":{},"f:kind":{},"f:name":{},"f:namespace":{},"f:resourceVersion":{},"f:uid":{}}}} Subresource:}]} Spec:{VolumeSnapshotRef:{Kind:VolumeSnapshot Namespace:immich Name:k10-csi-snap-xn7g8vfnfdtq7tfh UID:1de1d73d-813b-41c1-a187-f2f34150e54b APIVersion:snapshot.storage.k8s.io/v1 ResourceVersion:160021214 FieldPath:} DeletionPolicy:Delete Driver:org.democratic-csi.nfs VolumeSnapshotClassName:0xc0008880a0 Source:{VolumeHandle:0xc000888090 SnapshotHandle:<nil>} SourceVolumeMode:<nil>} Status:<nil>}
I0322 13:53:28.357113       1 snapshotter.go:56] CSI CreateSnapshot: snapshot-1de1d73d-813b-41c1-a187-f2f34150e54b
I0322 13:53:28.357132       1 connection.go:183] GRPC call: /csi.v1.Identity/GetPluginInfo
I0322 13:53:28.357144       1 connection.go:184] GRPC request: {}
I0322 13:53:28.358696       1 snapshot_controller_base.go:162] enqueued "snapcontent-1de1d73d-813b-41c1-a187-f2f34150e54b" for sync
I0322 13:53:28.361780       1 connection.go:186] GRPC response: {"name":"org.democratic-csi.nfs","vendor_version":"1.8.4"}
I0322 13:53:28.361881       1 connection.go:187] GRPC error: <nil>
I0322 13:53:28.361899       1 connection.go:183] GRPC call: /csi.v1.Controller/CreateSnapshot
I0322 13:53:28.361911       1 connection.go:184] GRPC request: {"name":"snapshot-1de1d73d-813b-41c1-a187-f2f34150e54b","source_volume_id":"pvc-0ccd4181-398b-4d3c-ad97-bc876159b7c4"}
I0322 13:53:29.535327       1 snapshot_controller_base.go:162] enqueued "snapcontent-1de1d73d-813b-41c1-a187-f2f34150e54b" for sync
I0322 13:53:30.133507       1 snapshot_controller_base.go:162] enqueued "snapcontent-7d9f4ade-9825-408a-83c1-1de1454f6955" for sync
I0322 13:53:30.133558       1 snapshot_controller_base.go:196] syncContentByKey[snapcontent-7d9f4ade-9825-408a-83c1-1de1454f6955]
I0322 13:53:30.133583       1 util.go:188] storeObjectUpdate: adding content "snapcontent-7d9f4ade-9825-408a-83c1-1de1454f6955", version 160021250
I0322 13:53:30.133602       1 snapshot_controller.go:56] synchronizing VolumeSnapshotContent[snapcontent-7d9f4ade-9825-408a-83c1-1de1454f6955]
I0322 13:53:30.133644       1 snapshot_controller.go:564] Check if VolumeSnapshotContent[snapcontent-7d9f4ade-9825-408a-83c1-1de1454f6955] should be deleted.
I0322 13:53:30.133656       1 snapshot_controller.go:75] syncContent: Call CreateSnapshot for content snapcontent-7d9f4ade-9825-408a-83c1-1de1454f6955
I0322 13:53:30.133668       1 snapshot_controller.go:103] createSnapshot for content [snapcontent-7d9f4ade-9825-408a-83c1-1de1454f6955]: started
I0322 13:53:30.133680       1 snapshot_controller.go:291] createSnapshotWrapper: Creating snapshot for content snapcontent-7d9f4ade-9825-408a-83c1-1de1454f6955 through the plugin ...
I0322 13:53:30.133692       1 snapshot_controller.go:205] getCSISnapshotInput for content [snapcontent-7d9f4ade-9825-408a-83c1-1de1454f6955]
I0322 13:53:30.133703       1 snapshot_controller.go:472] getSnapshotClass: VolumeSnapshotClassName [truenas]
I0322 13:53:30.133716       1 snapshot_controller.go:603] setAnnVolumeSnapshotBeingCreated: set annotation [snapshot.storage.kubernetes.io/volumesnapshot-being-created:yes] on content [snapcontent-7d9f4ade-9825-408a-83c1-1de1454f6955].
I0322 13:53:30.168017       1 util.go:216] storeObjectUpdate updating content "snapcontent-7d9f4ade-9825-408a-83c1-1de1454f6955" with version 160021252
I0322 13:53:30.168056       1 snapshot_controller.go:628] setAnnVolumeSnapshotBeingCreated: volume snapshot content &{TypeMeta:{Kind: APIVersion:} ObjectMeta:{Name:snapcontent-7d9f4ade-9825-408a-83c1-1de1454f6955 GenerateName: Namespace: SelfLink: UID:1141a1e7-79e5-4967-9c42-8ee5f6cc52be ResourceVersion:160021252 Generation:1 CreationTimestamp:2024-03-22 13:53:30 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[] Annotations:map[snapshot.storage.kubernetes.io/volumesnapshot-being-created:yes] OwnerReferences:[] Finalizers:[] ManagedFields:[{Manager:csi-snapshotter Operation:Update APIVersion:snapshot.storage.k8s.io/v1 Time:2024-03-22 13:53:30 +0000 UTC FieldsType:FieldsV1 FieldsV1:{"f:metadata":{"f:annotations":{".":{},"f:snapshot.storage.kubernetes.io/volumesnapshot-being-created":{}}}} Subresource:} {Manager:snapshot-controller Operation:Update APIVersion:snapshot.storage.k8s.io/v1 Time:2024-03-22 13:53:30 +0000 UTC FieldsType:FieldsV1 FieldsV1:{"f:spec":{".":{},"f:deletionPolicy":{},"f:driver":{},"f:source":{".":{},"f:volumeHandle":{}},"f:volumeSnapshotClassName":{},"f:volumeSnapshotRef":{".":{},"f:apiVersion":{},"f:kind":{},"f:name":{},"f:namespace":{},"f:resourceVersion":{},"f:uid":{}}}} Subresource:}]} Spec:{VolumeSnapshotRef:{Kind:VolumeSnapshot Namespace:immich Name:k10-csi-snap-d5gkwmhtl98qtwbh UID:7d9f4ade-9825-408a-83c1-1de1454f6955 APIVersion:snapshot.storage.k8s.io/v1 ResourceVersion:160021226 FieldPath:} DeletionPolicy:Delete Driver:org.democratic-csi.nfs VolumeSnapshotClassName:0xc000910fa0 Source:{VolumeHandle:0xc000910f90 SnapshotHandle:<nil>} SourceVolumeMode:<nil>} Status:<nil>}
I0322 13:53:30.168363       1 snapshotter.go:56] CSI CreateSnapshot: snapshot-7d9f4ade-9825-408a-83c1-1de1454f6955
I0322 13:53:30.168469       1 connection.go:183] GRPC call: /csi.v1.Identity/GetPluginInfo
I0322 13:53:30.168529       1 connection.go:184] GRPC request: {}
I0322 13:53:30.171041       1 snapshot_controller_base.go:162] enqueued "snapcontent-7d9f4ade-9825-408a-83c1-1de1454f6955" for sync
I0322 13:53:30.175521       1 connection.go:186] GRPC response: {"name":"org.democratic-csi.nfs","vendor_version":"1.8.4"}
I0322 13:53:30.175601       1 connection.go:187] GRPC error: <nil>
I0322 13:53:30.175619       1 connection.go:183] GRPC call: /csi.v1.Controller/CreateSnapshot
I0322 13:53:30.175631       1 connection.go:184] GRPC request: {"name":"snapshot-7d9f4ade-9825-408a-83c1-1de1454f6955","source_volume_id":"pvc-6bbbb99f-5236-4f25-a18e-3eb33bfe81b9"}
I0322 13:53:30.560824       1 snapshot_controller_base.go:162] enqueued "snapcontent-6dc56c8b-8eb6-4ed7-8508-0abf111320a5" for sync
I0322 13:53:30.560881       1 snapshot_controller_base.go:196] syncContentByKey[snapcontent-6dc56c8b-8eb6-4ed7-8508-0abf111320a5]
I0322 13:53:30.561123       1 snapshot_controller_base.go:162] enqueued "snapcontent-6dc56c8b-8eb6-4ed7-8508-0abf111320a5" for sync
I0322 13:53:30.561180       1 snapshot_controller_base.go:196] syncContentByKey[snapcontent-6dc56c8b-8eb6-4ed7-8508-0abf111320a5]
I0322 13:53:31.131049       1 snapshot_controller_base.go:162] enqueued "snapcontent-7d9f4ade-9825-408a-83c1-1de1454f6955" for sync
I0322 13:53:32.088029       1 leaderelection.go:278] successfully renewed lease democratic-csi/external-snapshotter-leader-org-democratic-csi-nfs
I0322 13:53:33.322165       1 connection.go:186] GRPC response: {"snapshot":{"creation_time":{"seconds":1711115610},"ready_to_use":true,"size_bytes":1006264320,"snapshot_id":"pvc-0ccd4181-398b-4d3c-ad97-bc876159b7c4@snapshot-1de1d73d-813b-41c1-a187-f2f34150e54b","source_volume_id":"pvc-0ccd4181-398b-4d3c-ad97-bc876159b7c4"}}

@travisghansen
Member

OK, leave that enabled for a bit if you don't mind, and send the logs from the csi-driver container instead if it ever pops up again.

Thanks!

@djjudas21
Author

I've managed to reproduce similar API problems, this time with a stuck volume that isn't being provisioned, while democratic-csi is running with debug logging.

For completeness, here are the full logs:

Let me step through it (partly to aid my own understanding); hopefully I've captured the relevant bits of info.

Here is the PVC I created, which is stuck in Pending:

[jonathan@poseidon ~]$ kubectl get pvc jellyfin-media -n jellyfin
NAME             STATUS    VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS   VOLUMEATTRIBUTESCLASS   AGE
jellyfin-media   Pending                                      truenas        <unset>                 12h

The describe view clearly shows something is wrong with the provisioner:

[jonathan@poseidon ~]$ kubectl describe pvc jellyfin-media -n jellyfin
Name:          jellyfin-media
Namespace:     jellyfin
StorageClass:  truenas
Status:        Pending
Volume:        
Labels:        <none>
Annotations:   volume.beta.kubernetes.io/storage-provisioner: org.democratic-csi.nfs
               volume.kubernetes.io/storage-provisioner: org.democratic-csi.nfs
Finalizers:    [kubernetes.io/pvc-protection]
Capacity:      
Access Modes:  
VolumeMode:    Filesystem
Used By:       <none>
Events:
  Type    Reason                Age                    From                                                                                                            Message
  ----    ------                ----                   ----                                                                                                            -------
  Normal  ExternalProvisioning  4m34s (x2141 over 8h)  persistentvolume-controller                                                                                     Waiting for a volume to be created either by the external provisioner 'org.democratic-csi.nfs' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered.
  Normal  Provisioning          2m49s (x169 over 10h)  org.democratic-csi.nfs_truenas-democratic-csi-controller-7f8b4dd8f9-b7vtv_175c0282-b6eb-4336-a402-04c3b130c63e  External provisioner is provisioning volume for claim "jellyfin/jellyfin-media"

In the democratic-csi namespace, the controller pod has a high restart count and was last restarted after I created the PVC above. The node pods were last restarted 12 and 24 days ago, which is expected due to node patching/rebooting.

[jonathan@poseidon democratic-csi]$ kubectl get po
NAME                                                 READY   STATUS    RESTARTS       AGE
truenas-democratic-csi-controller-7f8b4dd8f9-b7vtv   6/6     Running   162 (9h ago)   24d
truenas-democratic-csi-node-4znvs                    4/4     Running   24 (24d ago)   68d
truenas-democratic-csi-node-b22l4                    4/4     Running   8 (24d ago)    54d
truenas-democratic-csi-node-dznv5                    4/4     Running   22 (12d ago)   68d
truenas-democratic-csi-node-ldj9h                    4/4     Running   4 (24d ago)    68d
truenas-democratic-csi-node-nnvmn                    4/4     Running   12 (24d ago)   68d

The describe view on the controller pod shows the termination reason as Error (as opposed to OOMKilled etc.):

[jonathan@poseidon democratic-csi]$ kubectl describe po truenas-democratic-csi-controller-7f8b4dd8f9-b7vtv
Name:                 truenas-democratic-csi-controller-7f8b4dd8f9-b7vtv
Namespace:            democratic-csi
Priority:             10000
Priority Class Name:  normal-priority
Service Account:      truenas-democratic-csi-controller-sa
Node:                 kube05/192.168.0.57
Start Time:           Thu, 09 May 2024 17:31:14 +0100
Labels:               app.kubernetes.io/component=controller-linux
                      app.kubernetes.io/csi-role=controller
                      app.kubernetes.io/instance=truenas
                      app.kubernetes.io/managed-by=Helm
                      app.kubernetes.io/name=democratic-csi
                      pod-template-hash=7f8b4dd8f9
Annotations:          checksum/configmap: 7d7e061efe2197ef3a0cbc739a943c2fdfaa234057f24819c0130658a25f3d38
                      checksum/secret: 67f3893bebe879470b8a781d5917ed84e0db546574e058c7b542a0225cf0970e
                      cni.projectcalico.org/containerID: 802ebc87c114527aedec3314e9fcf693ba5229d668b28fe614bca30a99d4541b
                      cni.projectcalico.org/podIP: 10.1.199.109/32
                      cni.projectcalico.org/podIPs: 10.1.199.109/32
Status:               Running
IP:                   10.1.199.109
IPs:
  IP:           10.1.199.109
Controlled By:  ReplicaSet/truenas-democratic-csi-controller-7f8b4dd8f9
Containers:
  external-attacher:
    Container ID:  containerd://e98d213e252c054d4f9bb79a2da42618836a8bfc0a12cc3c2b87a8e3bb07328f
    Image:         registry.k8s.io/sig-storage/csi-attacher:v4.4.0
    Image ID:      registry.k8s.io/sig-storage/csi-attacher@sha256:dbded4ebe299973e31d0ef55815eb07a990fc67872b14900a8e69232acae4f2d
    Port:          <none>
    Host Port:     <none>
    Args:
      --v=5
      --leader-election
      --leader-election-namespace=democratic-csi
      --timeout=90s
      --worker-threads=10
      --csi-address=/csi-data/csi.sock
    State:          Running
      Started:      Sun, 02 Jun 2024 23:58:00 +0100
    Last State:     Terminated
      Reason:       Error
      Exit Code:    255
      Started:      Sun, 02 Jun 2024 23:18:00 +0100
      Finished:     Sun, 02 Jun 2024 23:57:59 +0100
    Ready:          True
    Restart Count:  40
    Environment:    <none>
    Mounts:
      /csi-data from socket-dir (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-q9pvg (ro)
  external-provisioner:
    Container ID:  containerd://7463343d46d48ed78cdf2cf0f35898aeb2629f417e8198412cf5a343c8f46882
    Image:         registry.k8s.io/sig-storage/csi-provisioner:v3.6.0
    Image ID:      registry.k8s.io/sig-storage/csi-provisioner@sha256:6de50bacfd72627ffe4209e3db8ad7391d04ea6fb741bf51feab6c5dcfd80002
    Port:          <none>
    Host Port:     <none>
    Args:
      --v=5
      --leader-election
      --leader-election-namespace=democratic-csi
      --timeout=90s
      --worker-threads=10
      --extra-create-metadata
      --csi-address=/csi-data/csi.sock
    State:          Running
      Started:      Sun, 02 Jun 2024 23:57:58 +0100
    Last State:     Terminated
      Reason:       Error
      Exit Code:    255
      Started:      Sun, 02 Jun 2024 22:33:13 +0100
      Finished:     Sun, 02 Jun 2024 23:57:57 +0100
    Ready:          True
    Restart Count:  43
    Environment:
      NODE_NAME:   (v1:spec.nodeName)
      NAMESPACE:  democratic-csi (v1:metadata.namespace)
      POD_NAME:   truenas-democratic-csi-controller-7f8b4dd8f9-b7vtv (v1:metadata.name)
    Mounts:
      /csi-data from socket-dir (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-q9pvg (ro)
  external-resizer:
    Container ID:  containerd://ab883ff59bdb32f114668ab00c32b583ef9ca9082da3aaa24d054d81f03924ce
    Image:         registry.k8s.io/sig-storage/csi-resizer:v1.9.0
    Image ID:      registry.k8s.io/sig-storage/csi-resizer@sha256:f1f352df97874442624fcef23eab04aa038f66cb0e361212f6ec09e92998184d
    Port:          <none>
    Host Port:     <none>
    Args:
      --v=5
      --leader-election
      --leader-election-namespace=democratic-csi
      --timeout=90s
      --workers=10
      --csi-address=/csi-data/csi.sock
    State:          Running
      Started:      Sun, 02 Jun 2024 22:33:14 +0100
    Last State:     Terminated
      Reason:       Error
      Exit Code:    255
      Started:      Sun, 02 Jun 2024 22:27:58 +0100
      Finished:     Sun, 02 Jun 2024 22:32:55 +0100
    Ready:          True
    Restart Count:  39
    Environment:
      NODE_NAME:   (v1:spec.nodeName)
      NAMESPACE:  democratic-csi (v1:metadata.namespace)
      POD_NAME:   truenas-democratic-csi-controller-7f8b4dd8f9-b7vtv (v1:metadata.name)
    Mounts:
      /csi-data from socket-dir (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-q9pvg (ro)
  external-snapshotter:
    Container ID:  containerd://ee09044fc0cf9b49787b873b1594a5a283d428184d61c78eb7cbd0dcb7a4ce31
    Image:         registry.k8s.io/sig-storage/csi-snapshotter:v6.3.0
    Image ID:      registry.k8s.io/sig-storage/csi-snapshotter@sha256:93fbf54dd67f586428ea339743160e1a5110c058afde3c2e824f11fbc6efae53
    Port:          <none>
    Host Port:     <none>
    Args:
      --v=5
      --leader-election
      --leader-election-namespace=democratic-csi
      --timeout=90s
      --worker-threads=10
      --csi-address=/csi-data/csi.sock
    State:          Running
      Started:      Mon, 03 Jun 2024 00:23:02 +0100
    Last State:     Terminated
      Reason:       Error
      Exit Code:    255
      Started:      Sun, 02 Jun 2024 23:58:01 +0100
      Finished:     Mon, 03 Jun 2024 00:22:58 +0100
    Ready:          True
    Restart Count:  40
    Environment:
      NODE_NAME:   (v1:spec.nodeName)
      NAMESPACE:  democratic-csi (v1:metadata.namespace)
      POD_NAME:   truenas-democratic-csi-controller-7f8b4dd8f9-b7vtv (v1:metadata.name)
    Mounts:
      /csi-data from socket-dir (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-q9pvg (ro)
  csi-driver:
    Container ID:  containerd://47ca8e4f0cf441facb033827b4424e5e99894b487ca10ad732cc038e8e36974b
    Image:         docker.io/democraticcsi/democratic-csi:latest
    Image ID:      docker.io/democraticcsi/democratic-csi@sha256:1e69f309256266b9bbf35e29f5c427a0ba21bff0da03943caf55e2bdc7558f83
    Port:          <none>
    Host Port:     <none>
    Args:
      --csi-version=1.5.0
      --csi-name=org.democratic-csi.nfs
      --driver-config-file=/config/driver-config-file.yaml
      --log-level=debug
      --csi-mode=controller
      --server-socket=/csi-data/csi.sock.internal
    State:          Running
      Started:      Thu, 09 May 2024 17:31:29 +0100
    Ready:          True
    Restart Count:  0
    Liveness:       exec [bin/liveness-probe --csi-version=1.5.0 --csi-address=/csi-data/csi.sock.internal] delay=10s timeout=15s period=60s #success=1 #failure=3
    Environment:
      NODE_EXTRA_CA_CERTS:  /tmp/certs/extra-ca-certs.crt
    Mounts:
      /config from config (rw)
      /csi-data from socket-dir (rw)
      /tmp/certs from extra-ca-certs (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-q9pvg (ro)
  csi-proxy:
    Container ID:   containerd://ee6ab89f1f807fb8314a740607391b421a00237d29140fb83e8aabcc37d3e1e7
    Image:          docker.io/democraticcsi/csi-grpc-proxy:v0.5.6
    Image ID:       docker.io/democraticcsi/csi-grpc-proxy@sha256:fc5fc1fe9c682463fe8b219db070930e77067d76449749ef5ba99cedd30fa437
    Port:           <none>
    Host Port:      <none>
    State:          Running
      Started:      Thu, 09 May 2024 17:31:29 +0100
    Ready:          True
    Restart Count:  0
    Environment:
      BIND_TO:   unix:///csi-data/csi.sock
      PROXY_TO:  unix:///csi-data/csi.sock.internal
    Mounts:
      /csi-data from socket-dir (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-q9pvg (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True 
  Initialized                 True 
  Ready                       True 
  ContainersReady             True 
  PodScheduled                True 
Volumes:
  socket-dir:
    Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:     
    SizeLimit:  <unset>
  config:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  truenas-democratic-csi-driver-config
    Optional:    false
  extra-ca-certs:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      truenas-democratic-csi
    Optional:  false
  kube-api-access-q9pvg:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              kubernetes.io/os=linux
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:                      <none>

I got the external-provisioner logs from the controller. The log is huge (~30k lines), so I've filtered it down to lines mentioning jellyfin. The provisioner seems to think the volume already exists with a different spec (it doesn't; this is the first time I have created a volume with this name), and the same messages repeat over and over.

[jonathan@poseidon jellyfin]$ kubectl logs truenas-democratic-csi-controller-7f8b4dd8f9-b7vtv -c external-provisioner > ~/external-provisioner.log
[jonathan@poseidon jellyfin]$ grep jellyfin ~/external-provisioner.log
I0602 22:58:15.922055       1 controller.go:1366] provision "jellyfin/jellyfin-media" class "truenas": started
I0602 22:58:15.922208       1 connection.go:194] GRPC request: {"capacity_range":{"required_bytes":1099511627776},"name":"pvc-ce626c4c-1d1b-4571-b7e1-42f9ba17dc52","parameters":{"csi.storage.k8s.io/pv/name":"pvc-ce626c4c-1d1b-4571-b7e1-42f9ba17dc52","csi.storage.k8s.io/pvc/name":"jellyfin-media","csi.storage.k8s.io/pvc/namespace":"jellyfin","fsType":"nfs","mountOptions":"[noatime nfsvers=4]"},"volume_capabilities":[{"AccessType":{"Mount":{"fs_type":"nfs"}},"access_mode":{"mode":5}}]}
I0602 22:58:15.926419       1 controller.go:1152] handleProtectionFinalizer Volume : &PersistentVolume{ObjectMeta:{pvc-735432a5-3fbd-4eee-a664-2be03fa86b41    551e9b8f-7d66-4707-be7a-02ed50d46892 123945727 0 2023-12-08 20:51:59 +0000 UTC <nil> <nil> map[] map[pv.kubernetes.io/provisioned-by:org.democratic-csi.nfs volume.kubernetes.io/provisioner-deletion-secret-name: volume.kubernetes.io/provisioner-deletion-secret-namespace:] [] [kubernetes.io/pv-protection] [{csi-provisioner Update v1 2023-12-08 20:51:59 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:pv.kubernetes.io/provisioned-by":{},"f:volume.kubernetes.io/provisioner-deletion-secret-name":{},"f:volume.kubernetes.io/provisioner-deletion-secret-namespace":{}}},"f:spec":{"f:accessModes":{},"f:capacity":{".":{},"f:storage":{}},"f:claimRef":{".":{},"f:apiVersion":{},"f:kind":{},"f:name":{},"f:namespace":{},"f:resourceVersion":{},"f:uid":{}},"f:csi":{".":{},"f:driver":{},"f:fsType":{},"f:volumeAttributes":{".":{},"f:node_attach_driver":{},"f:provisioner_driver":{},"f:provisioner_driver_instance_id":{},"f:server":{},"f:share":{},"f:storage.kubernetes.io/csiProvisionerIdentity":{}},"f:volumeHandle":{}},"f:persistentVolumeReclaimPolicy":{},"f:storageClassName":{},"f:volumeMode":{}}} } {kubelite Update v1 2023-12-08 20:51:59 +0000 UTC FieldsV1 {"f:status":{"f:phase":{}}} status}]},Spec:PersistentVolumeSpec{Capacity:ResourceList{storage: {{1073741824 0} {<nil>} 1Gi BinarySI},},PersistentVolumeSource:PersistentVolumeSource{GCEPersistentDisk:nil,AWSElasticBlockStore:nil,HostPath:nil,Glusterfs:nil,NFS:nil,RBD:nil,ISCSI:nil,Cinder:nil,CephFS:nil,FC:nil,Flocker:nil,FlexVolume:nil,AzureFile:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Local:nil,StorageOS:nil,CSI:&CSIPersistentVolumeSource{Driver:org.democratic-csi.nfs,VolumeHandle:pvc-735432a5-3fbd-4eee-a664-2be03fa86b41,ReadOnly:false,FSType:nfs,VolumeAttributes:map[string]string{node_attach_driver: 
nfs,provisioner_driver: freenas-api-nfs,provisioner_driver_instance_id: truenas,server: 192.168.0.2,share: /mnt/ssd/k8s/vols/pvc-735432a5-3fbd-4eee-a664-2be03fa86b41,storage.kubernetes.io/csiProvisionerIdentity: 1702058271810-1550-org.democratic-csi.nfs,},ControllerPublishSecretRef:nil,NodeStageSecretRef:nil,NodePublishSecretRef:nil,ControllerExpandSecretRef:nil,NodeExpandSecretRef:nil,},},AccessModes:[ReadWriteOnce],ClaimRef:&ObjectReference{Kind:PersistentVolumeClaim,Namespace:jellyfin,Name:jellyfin-config,UID:735432a5-3fbd-4eee-a664-2be03fa86b41,APIVersion:v1,ResourceVersion:123945562,FieldPath:,},PersistentVolumeReclaimPolicy:Delete,StorageClassName:truenas,MountOptions:[],VolumeMode:*Filesystem,NodeAffinity:nil,},Status:PersistentVolumeStatus{Phase:Bound,Message:,Reason:,LastPhaseTransitionTime:<nil>,},}
I0602 22:58:15.930736       1 controller.go:1152] handleProtectionFinalizer Volume : &PersistentVolume{ObjectMeta:{pvc-f7d87d39-bdaa-4034-9508-dfbd231427c2    83eb16d9-2022-44fe-95d1-21987d3175ea 182970266 0 2023-12-08 20:51:59 +0000 UTC <nil> <nil> map[] map[pv.kubernetes.io/provisioned-by:org.democratic-csi.nfs volume.kubernetes.io/provisioner-deletion-secret-name: volume.kubernetes.io/provisioner-deletion-secret-namespace:] [] [kubernetes.io/pv-protection] [{csi-provisioner Update v1 2023-12-08 20:51:59 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:pv.kubernetes.io/provisioned-by":{},"f:volume.kubernetes.io/provisioner-deletion-secret-name":{},"f:volume.kubernetes.io/provisioner-deletion-secret-namespace":{}}},"f:spec":{"f:accessModes":{},"f:capacity":{},"f:claimRef":{".":{},"f:apiVersion":{},"f:kind":{},"f:name":{},"f:namespace":{},"f:resourceVersion":{},"f:uid":{}},"f:csi":{".":{},"f:driver":{},"f:fsType":{},"f:volumeAttributes":{".":{},"f:node_attach_driver":{},"f:provisioner_driver":{},"f:provisioner_driver_instance_id":{},"f:server":{},"f:share":{},"f:storage.kubernetes.io/csiProvisionerIdentity":{}},"f:volumeHandle":{}},"f:persistentVolumeReclaimPolicy":{},"f:storageClassName":{},"f:volumeMode":{}}} } {kubelite Update v1 2023-12-08 20:51:59 +0000 UTC FieldsV1 {"f:status":{"f:phase":{}}} status} {csi-resizer Update v1 2024-05-14 09:00:17 +0000 UTC FieldsV1 {"f:spec":{"f:capacity":{"f:storage":{}}}} }]},Spec:PersistentVolumeSpec{Capacity:ResourceList{storage: {{8589934592 0} {<nil>}  
BinarySI},},PersistentVolumeSource:PersistentVolumeSource{GCEPersistentDisk:nil,AWSElasticBlockStore:nil,HostPath:nil,Glusterfs:nil,NFS:nil,RBD:nil,ISCSI:nil,Cinder:nil,CephFS:nil,FC:nil,Flocker:nil,FlexVolume:nil,AzureFile:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Local:nil,StorageOS:nil,CSI:&CSIPersistentVolumeSource{Driver:org.democratic-csi.nfs,VolumeHandle:pvc-f7d87d39-bdaa-4034-9508-dfbd231427c2,ReadOnly:false,FSType:nfs,VolumeAttributes:map[string]string{node_attach_driver: nfs,provisioner_driver: freenas-api-nfs,provisioner_driver_instance_id: truenas,server: 192.168.0.2,share: /mnt/ssd/k8s/vols/pvc-f7d87d39-bdaa-4034-9508-dfbd231427c2,storage.kubernetes.io/csiProvisionerIdentity: 1702058271810-1550-org.democratic-csi.nfs,},ControllerPublishSecretRef:nil,NodeStageSecretRef:nil,NodePublishSecretRef:nil,ControllerExpandSecretRef:nil,NodeExpandSecretRef:nil,},},AccessModes:[ReadWriteOnce],ClaimRef:&ObjectReference{Kind:PersistentVolumeClaim,Namespace:jellyfin,Name:jellyfin-data,UID:f7d87d39-bdaa-4034-9508-dfbd231427c2,APIVersion:v1,ResourceVersion:123945561,FieldPath:,},PersistentVolumeReclaimPolicy:Delete,StorageClassName:truenas,MountOptions:[],VolumeMode:*Filesystem,NodeAffinity:nil,},Status:PersistentVolumeStatus{Phase:Bound,Message:,Reason:,LastPhaseTransitionTime:<nil>,},}
I0602 22:58:15.932006       1 event.go:298] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"jellyfin", Name:"jellyfin-media", UID:"ce626c4c-1d1b-4571-b7e1-42f9ba17dc52", APIVersion:"v1", ResourceVersion:"194218763", FieldPath:""}): type: 'Normal' reason: 'Provisioning' External provisioner is provisioning volume for claim "jellyfin/jellyfin-media"
I0602 22:58:15.951846       1 event.go:298] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"jellyfin", Name:"jellyfin-media", UID:"ce626c4c-1d1b-4571-b7e1-42f9ba17dc52", APIVersion:"v1", ResourceVersion:"194218763", FieldPath:""}): type: 'Warning' reason: 'ProvisioningFailed' failed to provision volume with StorageClass "truenas": rpc error: code = AlreadyExists desc = volume has already been created with a different size, existing size: -, required_bytes: 1099511627776, limit_bytes: 0
I0602 22:58:16.952295       1 controller.go:1366] provision "jellyfin/jellyfin-media" class "truenas": started
I0602 22:58:16.952633       1 event.go:298] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"jellyfin", Name:"jellyfin-media", UID:"ce626c4c-1d1b-4571-b7e1-42f9ba17dc52", APIVersion:"v1", ResourceVersion:"194218763", FieldPath:""}): type: 'Normal' reason: 'Provisioning' External provisioner is provisioning volume for claim "jellyfin/jellyfin-media"
I0602 22:58:16.952600       1 connection.go:194] GRPC request: {"capacity_range":{"required_bytes":1099511627776},"name":"pvc-ce626c4c-1d1b-4571-b7e1-42f9ba17dc52","parameters":{"csi.storage.k8s.io/pv/name":"pvc-ce626c4c-1d1b-4571-b7e1-42f9ba17dc52","csi.storage.k8s.io/pvc/name":"jellyfin-media","csi.storage.k8s.io/pvc/namespace":"jellyfin","fsType":"nfs","mountOptions":"[noatime nfsvers=4]"},"volume_capabilities":[{"AccessType":{"Mount":{"fs_type":"nfs"}},"access_mode":{"mode":5}}]}
I0602 22:58:16.965488       1 event.go:298] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"jellyfin", Name:"jellyfin-media", UID:"ce626c4c-1d1b-4571-b7e1-42f9ba17dc52", APIVersion:"v1", ResourceVersion:"194218763", FieldPath:""}): type: 'Warning' reason: 'ProvisioningFailed' failed to provision volume with StorageClass "truenas": rpc error: code = AlreadyExists desc = volume has already been created with a different size, existing size: -, required_bytes: 1099511627776, limit_bytes: 0
I0602 22:58:18.965914       1 controller.go:1366] provision "jellyfin/jellyfin-media" class "truenas": started
I0602 22:58:18.966063       1 event.go:298] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"jellyfin", Name:"jellyfin-media", UID:"ce626c4c-1d1b-4571-b7e1-42f9ba17dc52", APIVersion:"v1", ResourceVersion:"194218763", FieldPath:""}): type: 'Normal' reason: 'Provisioning' External provisioner is provisioning volume for claim "jellyfin/jellyfin-media"
I0602 22:58:18.966101       1 connection.go:194] GRPC request: {"capacity_range":{"required_bytes":1099511627776},"name":"pvc-ce626c4c-1d1b-4571-b7e1-42f9ba17dc52","parameters":{"csi.storage.k8s.io/pv/name":"pvc-ce626c4c-1d1b-4571-b7e1-42f9ba17dc52","csi.storage.k8s.io/pvc/name":"jellyfin-media","csi.storage.k8s.io/pvc/namespace":"jellyfin","fsType":"nfs","mountOptions":"[noatime nfsvers=4]"},"volume_capabilities":[{"AccessType":{"Mount":{"fs_type":"nfs"}},"access_mode":{"mode":5}}]}
I0602 22:58:18.971664       1 event.go:298] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"jellyfin", Name:"jellyfin-media", UID:"ce626c4c-1d1b-4571-b7e1-42f9ba17dc52", APIVersion:"v1", ResourceVersion:"194218763", FieldPath:""}): type: 'Warning' reason: 'ProvisioningFailed' failed to provision volume with StorageClass "truenas": rpc error: code = AlreadyExists desc = volume has already been created with a different size, existing size: -, required_bytes: 1099511627776, limit_bytes: 0
I0602 22:58:22.972269       1 controller.go:1366] provision "jellyfin/jellyfin-media" class "truenas": started
I0602 22:58:22.972534       1 event.go:298] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"jellyfin", Name:"jellyfin-media", UID:"ce626c4c-1d1b-4571-b7e1-42f9ba17dc52", APIVersion:"v1", ResourceVersion:"194218763", FieldPath:""}): type: 'Normal' reason: 'Provisioning' External provisioner is provisioning volume for claim "jellyfin/jellyfin-media"
I0602 22:58:22.972492       1 connection.go:194] GRPC request: {"capacity_range":{"required_bytes":1099511627776},"name":"pvc-ce626c4c-1d1b-4571-b7e1-42f9ba17dc52","parameters":{"csi.storage.k8s.io/pv/name":"pvc-ce626c4c-1d1b-4571-b7e1-42f9ba17dc52","csi.storage.k8s.io/pvc/name":"jellyfin-media","csi.storage.k8s.io/pvc/namespace":"jellyfin","fsType":"nfs","mountOptions":"[noatime nfsvers=4]"},"volume_capabilities":[{"AccessType":{"Mount":{"fs_type":"nfs"}},"access_mode":{"mode":5}}]}
I0602 22:58:22.984338       1 event.go:298] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"jellyfin", Name:"jellyfin-media", UID:"ce626c4c-1d1b-4571-b7e1-42f9ba17dc52", APIVersion:"v1", ResourceVersion:"194218763", FieldPath:""}): type: 'Warning' reason: 'ProvisioningFailed' failed to provision volume with StorageClass "truenas": rpc error: code = AlreadyExists desc = volume has already been created with a different size, existing size: -, required_bytes: 1099511627776, limit_bytes: 0
I0602 22:58:30.984791       1 controller.go:1366] provision "jellyfin/jellyfin-media" class "truenas": started
I0602 22:58:30.985019       1 connection.go:194] GRPC request: {"capacity_range":{"required_bytes":1099511627776},"name":"pvc-ce626c4c-1d1b-4571-b7e1-42f9ba17dc52","parameters":{"csi.storage.k8s.io/pv/name":"pvc-ce626c4c-1d1b-4571-b7e1-42f9ba17dc52","csi.storage.k8s.io/pvc/name":"jellyfin-media","csi.storage.k8s.io/pvc/namespace":"jellyfin","fsType":"nfs","mountOptions":"[noatime nfsvers=4]"},"volume_capabilities":[{"AccessType":{"Mount":{"fs_type":"nfs"}},"access_mode":{"mode":5}}]}
I0602 22:58:30.985324       1 event.go:298] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"jellyfin", Name:"jellyfin-media", UID:"ce626c4c-1d1b-4571-b7e1-42f9ba17dc52", APIVersion:"v1", ResourceVersion:"194218763", FieldPath:""}): type: 'Normal' reason: 'Provisioning' External provisioner is provisioning volume for claim "jellyfin/jellyfin-media"
I0602 22:58:30.998976       1 event.go:298] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"jellyfin", Name:"jellyfin-media", UID:"ce626c4c-1d1b-4571-b7e1-42f9ba17dc52", APIVersion:"v1", ResourceVersion:"194218763", FieldPath:""}): type: 'Warning' reason: 'ProvisioningFailed' failed to provision volume with StorageClass "truenas": rpc error: code = AlreadyExists desc = volume has already been created with a different size, existing size: -, required_bytes: 1099511627776, limit_bytes: 0
I0602 22:58:46.999895       1 controller.go:1366] provision "jellyfin/jellyfin-media" class "truenas": started
I0602 22:58:47.000060       1 connection.go:194] GRPC request: {"capacity_range":{"required_bytes":1099511627776},"name":"pvc-ce626c4c-1d1b-4571-b7e1-42f9ba17dc52","parameters":{"csi.storage.k8s.io/pv/name":"pvc-ce626c4c-1d1b-4571-b7e1-42f9ba17dc52","csi.storage.k8s.io/pvc/name":"jellyfin-media","csi.storage.k8s.io/pvc/namespace":"jellyfin","fsType":"nfs","mountOptions":"[noatime nfsvers=4]"},"volume_capabilities":[{"AccessType":{"Mount":{"fs_type":"nfs"}},"access_mode":{"mode":5}}]}
I0602 22:58:47.000293       1 event.go:298] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"jellyfin", Name:"jellyfin-media", UID:"ce626c4c-1d1b-4571-b7e1-42f9ba17dc52", APIVersion:"v1", ResourceVersion:"194218763", FieldPath:""}): type: 'Normal' reason: 'Provisioning' External provisioner is provisioning volume for claim "jellyfin/jellyfin-media"
I0602 22:58:47.016559       1 event.go:298] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"jellyfin", Name:"jellyfin-media", UID:"ce626c4c-1d1b-4571-b7e1-42f9ba17dc52", APIVersion:"v1", ResourceVersion:"194218763", FieldPath:""}): type: 'Warning' reason: 'ProvisioningFailed' failed to provision volume with StorageClass "truenas": rpc error: code = AlreadyExists desc = volume has already been created with a different size, existing size: -, required_bytes: 1099511627776, limit_bytes: 0
I0602 22:59:19.016979       1 controller.go:1366] provision "jellyfin/jellyfin-media" class "truenas": started
I0602 22:59:19.017280       1 event.go:298] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"jellyfin", Name:"jellyfin-media", UID:"ce626c4c-1d1b-4571-b7e1-42f9ba17dc52", APIVersion:"v1", ResourceVersion:"194218763", FieldPath:""}): type: 'Normal' reason: 'Provisioning' External provisioner is provisioning volume for claim "jellyfin/jellyfin-media"
I0602 22:59:19.017167       1 connection.go:194] GRPC request: {"capacity_range":{"required_bytes":1099511627776},"name":"pvc-ce626c4c-1d1b-4571-b7e1-42f9ba17dc52","parameters":{"csi.storage.k8s.io/pv/name":"pvc-ce626c4c-1d1b-4571-b7e1-42f9ba17dc52","csi.storage.k8s.io/pvc/name":"jellyfin-media","csi.storage.k8s.io/pvc/namespace":"jellyfin","fsType":"nfs","mountOptions":"[noatime nfsvers=4]"},"volume_capabilities":[{"AccessType":{"Mount":{"fs_type":"nfs"}},"access_mode":{"mode":5}}]}
I0602 22:59:19.031267       1 event.go:298] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"jellyfin", Name:"jellyfin-media", UID:"ce626c4c-1d1b-4571-b7e1-42f9ba17dc52", APIVersion:"v1", ResourceVersion:"194218763", FieldPath:""}): type: 'Warning' reason: 'ProvisioningFailed' failed to provision volume with StorageClass "truenas": rpc error: code = AlreadyExists desc = volume has already been created with a different size, existing size: -, required_bytes: 1099511627776, limit_bytes: 0
I0602 23:00:23.032027       1 controller.go:1366] provision "jellyfin/jellyfin-media" class "truenas": started
I0602 23:00:23.032501       1 event.go:298] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"jellyfin", Name:"jellyfin-media", UID:"ce626c4c-1d1b-4571-b7e1-42f9ba17dc52", APIVersion:"v1", ResourceVersion:"194218763", FieldPath:""}): type: 'Normal' reason: 'Provisioning' External provisioner is provisioning volume for claim "jellyfin/jellyfin-media"
I0602 23:00:23.032191       1 connection.go:194] GRPC request: {"capacity_range":{"required_bytes":1099511627776},"name":"pvc-ce626c4c-1d1b-4571-b7e1-42f9ba17dc52","parameters":{"csi.storage.k8s.io/pv/name":"pvc-ce626c4c-1d1b-4571-b7e1-42f9ba17dc52","csi.storage.k8s.io/pvc/name":"jellyfin-media","csi.storage.k8s.io/pvc/namespace":"jellyfin","fsType":"nfs","mountOptions":"[noatime nfsvers=4]"},"volume_capabilities":[{"AccessType":{"Mount":{"fs_type":"nfs"}},"access_mode":{"mode":5}}]}
I0602 23:00:23.047329       1 event.go:298] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"jellyfin", Name:"jellyfin-media", UID:"ce626c4c-1d1b-4571-b7e1-42f9ba17dc52", APIVersion:"v1", ResourceVersion:"194218763", FieldPath:""}): type: 'Warning' reason: 'ProvisioningFailed' failed to provision volume with StorageClass "truenas": rpc error: code = AlreadyExists desc = volume has already been created with a different size, existing size: -, required_bytes: 1099511627776, limit_bytes: 0
I0602 23:02:31.048116       1 controller.go:1366] provision "jellyfin/jellyfin-media" class "truenas": started
I0602 23:02:31.048539       1 event.go:298] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"jellyfin", Name:"jellyfin-media", UID:"ce626c4c-1d1b-4571-b7e1-42f9ba17dc52", APIVersion:"v1", ResourceVersion:"194218763", FieldPath:""}): type: 'Normal' reason: 'Provisioning' External provisioner is provisioning volume for claim "jellyfin/jellyfin-media"
I0602 23:02:31.048595       1 connection.go:194] GRPC request: {"capacity_range":{"required_bytes":1099511627776},"name":"pvc-ce626c4c-1d1b-4571-b7e1-42f9ba17dc52","parameters":{"csi.storage.k8s.io/pv/name":"pvc-ce626c4c-1d1b-4571-b7e1-42f9ba17dc52","csi.storage.k8s.io/pvc/name":"jellyfin-media","csi.storage.k8s.io/pvc/namespace":"jellyfin","fsType":"nfs","mountOptions":"[noatime nfsvers=4]"},"volume_capabilities":[{"AccessType":{"Mount":{"fs_type":"nfs"}},"access_mode":{"mode":5}}]}
I0602 23:02:31.054178       1 event.go:298] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"jellyfin", Name:"jellyfin-media", UID:"ce626c4c-1d1b-4571-b7e1-42f9ba17dc52", APIVersion:"v1", ResourceVersion:"194218763", FieldPath:""}): type: 'Warning' reason: 'ProvisioningFailed' failed to provision volume with StorageClass "truenas": rpc error: code = AlreadyExists desc = volume has already been created with a different size, existing size: -, required_bytes: 1099511627776, limit_bytes: 0
I0602 23:06:47.054292       1 controller.go:1366] provision "jellyfin/jellyfin-media" class "truenas": started
I0602 23:06:47.054426       1 connection.go:194] GRPC request: {"capacity_range":{"required_bytes":1099511627776},"name":"pvc-ce626c4c-1d1b-4571-b7e1-42f9ba17dc52","parameters":{"csi.storage.k8s.io/pv/name":"pvc-ce626c4c-1d1b-4571-b7e1-42f9ba17dc52","csi.storage.k8s.io/pvc/name":"jellyfin-media","csi.storage.k8s.io/pvc/namespace":"jellyfin","fsType":"nfs","mountOptions":"[noatime nfsvers=4]"},"volume_capabilities":[{"AccessType":{"Mount":{"fs_type":"nfs"}},"access_mode":{"mode":5}}]}
I0602 23:06:47.054573       1 event.go:298] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"jellyfin", Name:"jellyfin-media", UID:"ce626c4c-1d1b-4571-b7e1-42f9ba17dc52", APIVersion:"v1", ResourceVersion:"194218763", FieldPath:""}): type: 'Normal' reason: 'Provisioning' External provisioner is provisioning volume for claim "jellyfin/jellyfin-media"
I0602 23:06:47.061722       1 event.go:298] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"jellyfin", Name:"jellyfin-media", UID:"ce626c4c-1d1b-4571-b7e1-42f9ba17dc52", APIVersion:"v1", ResourceVersion:"194218763", FieldPath:""}): type: 'Warning' reason: 'ProvisioningFailed' failed to provision volume with StorageClass "truenas": rpc error: code = AlreadyExists desc = volume has already been created with a different size, existing size: -, required_bytes: 1099511627776, limit_bytes: 0
I0602 23:11:47.062130       1 controller.go:1366] provision "jellyfin/jellyfin-media" class "truenas": started
I0602 23:11:47.062452       1 event.go:298] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"jellyfin", Name:"jellyfin-media", UID:"ce626c4c-1d1b-4571-b7e1-42f9ba17dc52", APIVersion:"v1", ResourceVersion:"194218763", FieldPath:""}): type: 'Normal' reason: 'Provisioning' External provisioner is provisioning volume for claim "jellyfin/jellyfin-media"
I0602 23:11:47.062341       1 connection.go:194] GRPC request: {"capacity_range":{"required_bytes":1099511627776},"name":"pvc-ce626c4c-1d1b-4571-b7e1-42f9ba17dc52","parameters":{"csi.storage.k8s.io/pv/name":"pvc-ce626c4c-1d1b-4571-b7e1-42f9ba17dc52","csi.storage.k8s.io/pvc/name":"jellyfin-media","csi.storage.k8s.io/pvc/namespace":"jellyfin","fsType":"nfs","mountOptions":"[noatime nfsvers=4]"},"volume_capabilities":[{"AccessType":{"Mount":{"fs_type":"nfs"}},"access_mode":{"mode":5}}]}
I0602 23:11:47.076376       1 event.go:298] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"jellyfin", Name:"jellyfin-media", UID:"ce626c4c-1d1b-4571-b7e1-42f9ba17dc52", APIVersion:"v1", ResourceVersion:"194218763", FieldPath:""}): type: 'Warning' reason: 'ProvisioningFailed' failed to provision volume with StorageClass "truenas": rpc error: code = AlreadyExists desc = volume has already been created with a different size, existing size: -, required_bytes: 1099511627776, limit_bytes: 0
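For what it's worth, here is my guess at why the error string shows `existing size: -`: the driver presumably looks up the existing dataset's size/quota and compares it with `required_bytes`, and when that property is missing from the API response it renders as `-` and can never match, so every retry fails with `AlreadyExists`. A minimal sketch of that logic (hypothetical; `checkExistingSize` is my own illustration, not the driver's actual code):

```javascript
// Hypothetical reconstruction of the size check behind the AlreadyExists
// error. If the existing dataset's size cannot be parsed (missing/blank),
// it is rendered as "-" and never equals required_bytes, so the external
// provisioner retries forever with the same failure.
function checkExistingSize(existingSize, requiredBytes) {
  const existing = Number.parseInt(existingSize, 10);
  if (Number.isNaN(existing) || existing !== requiredBytes) {
    const shown = Number.isNaN(existing) ? "-" : existing;
    return (
      "volume has already been created with a different size, " +
      `existing size: ${shown}, required_bytes: ${requiredBytes}, limit_bytes: 0`
    );
  }
  return null; // sizes match: the CreateVolume call is idempotent and succeeds
}

console.log(checkExistingSize(undefined, 1099511627776));
```

Under that assumption the fix would be on the driver side (handle a dataset whose size attribute is absent), not anything in my PVC spec.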

In the log I also noticed some suspicious-looking messages about volume deletion:

I0603 08:56:51.074886       1 controller.go:1152] handleProtectionFinalizer Volume : &PersistentVolume{ObjectMeta:{pvc-be9cae9a-cc12-4caa-8605-997609cd3070    9853471e-1c8d-4cdb-a176-56d39045011e 180295056 0 2024-05-09 16:17:53 +0000 UTC <nil> <nil> map[] map[pv.kubernetes.io/provisioned-by:org.democratic-csi.nfs volume.kubernetes.io/provisioner-deletion-secret-name: volume.kubernetes.io/provisioner-deletion-secret-namespace:] [] [kubernetes.io/pv-protection] [{csi-provisioner Update v1 2024-05-09 16:17:53 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:pv.kubernetes.io/provisioned-by":{},"f:volume.kubernetes.io/provisioner-deletion-secret-name":{},"f:volume.kubernetes.io/provisioner-deletion-secret-namespace":{}}},"f:spec":{"f:accessModes":{},"f:capacity":{".":{},"f:storage":{}},"f:claimRef":{".":{},"f:apiVersion":{},"f:kind":{},"f:name":{},"f:namespace":{},"f:resourceVersion":{},"f:uid":{}},"f:csi":{".":{},"f:driver":{},"f:fsType":{},"f:volumeAttributes":{".":{},"f:node_attach_driver":{},"f:provisioner_driver":{},"f:provisioner_driver_instance_id":{},"f:server":{},"f:share":{},"f:storage.kubernetes.io/csiProvisionerIdentity":{}},"f:volumeHandle":{}},"f:persistentVolumeReclaimPolicy":{},"f:storageClassName":{},"f:volumeMode":{}}} } {kubelite Update v1 2024-05-09 16:18:36 +0000 UTC FieldsV1 {"f:status":{"f:phase":{}}} status}]},Spec:PersistentVolumeSpec{Capacity:ResourceList{storage: {{1073741824 0} {<nil>} 1Gi BinarySI},},PersistentVolumeSource:PersistentVolumeSource{GCEPersistentDisk:nil,AWSElasticBlockStore:nil,HostPath:nil,Glusterfs:nil,NFS:nil,RBD:nil,ISCSI:nil,Cinder:nil,CephFS:nil,FC:nil,Flocker:nil,FlexVolume:nil,AzureFile:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Local:nil,StorageOS:nil,CSI:&CSIPersistentVolumeSource{Driver:org.democratic-csi.nfs,VolumeHandle:pvc-be9cae9a-cc12-4caa-8605-997609cd3070,ReadOnly:false,FSType:nfs,VolumeAttributes:map[string]string{node_attach_driver: 
nfs,provisioner_driver: freenas-api-nfs,provisioner_driver_instance_id: truenas,server: 192.168.0.2,share: /mnt/ssd/k8s/vols/pvc-be9cae9a-cc12-4caa-8605-997609cd3070,storage.kubernetes.io/csiProvisionerIdentity: 1715270803271-9014-org.democratic-csi.nfs,},ControllerPublishSecretRef:nil,NodeStageSecretRef:nil,NodePublishSecretRef:nil,ControllerExpandSecretRef:nil,NodeExpandSecretRef:nil,},},AccessModes:[ReadWriteOnce],ClaimRef:&ObjectReference{Kind:PersistentVolumeClaim,Namespace:velero,Name:bookstack-20240509161714-qv2th,UID:be9cae9a-cc12-4caa-8605-997609cd3070,APIVersion:v1,ResourceVersion:180294425,FieldPath:,},PersistentVolumeReclaimPolicy:Delete,StorageClassName:truenas,MountOptions:[],VolumeMode:*Filesystem,NodeAffinity:nil,},Status:PersistentVolumeStatus{Phase:Released,Message:,Reason:,LastPhaseTransitionTime:2024-05-09 16:18:36 +0000 UTC,},}
I0603 08:56:51.075024       1 controller.go:1239] shouldDelete volume "pvc-be9cae9a-cc12-4caa-8605-997609cd3070"
I0603 08:56:51.075032       1 controller.go:1269] shouldDelete volume "pvc-be9cae9a-cc12-4caa-8605-997609cd3070" is true
I0603 08:56:51.075037       1 controller.go:1113] shouldDelete Volume: "pvc-be9cae9a-cc12-4caa-8605-997609cd3070"
I0603 08:56:51.075044       1 controller.go:1509] delete "pvc-be9cae9a-cc12-4caa-8605-997609cd3070": started
I0603 08:56:51.075053       1 controller.go:1279] volume pvc-be9cae9a-cc12-4caa-8605-997609cd3070 does not need any deletion secrets
I0603 08:56:51.075065       1 connection.go:193] GRPC call: /csi.v1.Controller/DeleteVolume
I0603 08:56:51.075070       1 connection.go:194] GRPC request: {"volume_id":"pvc-be9cae9a-cc12-4caa-8605-997609cd3070"}
I0603 08:56:51.080089       1 controller.go:1152] handleProtectionFinalizer Volume : &PersistentVolume{ObjectMeta:{pvc-1386fa0d-0073-4d39-8b6c-2ac77a09e74d    26db9a7e-ae28-4ac5-acab-8cda5ff00ce4 181087012 0 2024-05-11 01:18:39 +0000 UTC <nil> <nil> map[] map[pv.kubernetes.io/provisioned-by:org.democratic-csi.nfs volume.kubernetes.io/provisioner-deletion-secret-name: volume.kubernetes.io/provisioner-deletion-secret-namespace:] [] [kubernetes.io/pv-protection] [{csi-provisioner Update v1 2024-05-11 01:18:39 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:pv.kubernetes.io/provisioned-by":{},"f:volume.kubernetes.io/provisioner-deletion-secret-name":{},"f:volume.kubernetes.io/provisioner-deletion-secret-namespace":{}}},"f:spec":{"f:accessModes":{},"f:capacity":{".":{},"f:storage":{}},"f:claimRef":{".":{},"f:apiVersion":{},"f:kind":{},"f:name":{},"f:namespace":{},"f:resourceVersion":{},"f:uid":{}},"f:csi":{".":{},"f:driver":{},"f:fsType":{},"f:volumeAttributes":{".":{},"f:node_attach_driver":{},"f:provisioner_driver":{},"f:provisioner_driver_instance_id":{},"f:server":{},"f:share":{},"f:storage.kubernetes.io/csiProvisionerIdentity":{}},"f:volumeHandle":{}},"f:persistentVolumeReclaimPolicy":{},"f:storageClassName":{},"f:volumeMode":{}}} } {kubelite Update v1 2024-05-11 01:19:04 +0000 UTC FieldsV1 {"f:status":{"f:phase":{}}} status}]},Spec:PersistentVolumeSpec{Capacity:ResourceList{storage: {{1073741824 0} {<nil>} 1Gi BinarySI},},PersistentVolumeSource:PersistentVolumeSource{GCEPersistentDisk:nil,AWSElasticBlockStore:nil,HostPath:nil,Glusterfs:nil,NFS:nil,RBD:nil,ISCSI:nil,Cinder:nil,CephFS:nil,FC:nil,Flocker:nil,FlexVolume:nil,AzureFile:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Local:nil,StorageOS:nil,CSI:&CSIPersistentVolumeSource{Driver:org.democratic-csi.nfs,VolumeHandle:pvc-1386fa0d-0073-4d39-8b6c-2ac77a09e74d,ReadOnly:false,FSType:nfs,VolumeAttributes:map[string]string{node_attach_driver: 
nfs,provisioner_driver: freenas-api-nfs,provisioner_driver_instance_id: truenas,server: 192.168.0.2,share: /mnt/ssd/k8s/vols/pvc-1386fa0d-0073-4d39-8b6c-2ac77a09e74d,storage.kubernetes.io/csiProvisionerIdentity: 1715272731617-9265-org.democratic-csi.nfs,},ControllerPublishSecretRef:nil,NodeStageSecretRef:nil,NodePublishSecretRef:nil,ControllerExpandSecretRef:nil,NodeExpandSecretRef:nil,},},AccessModes:[ReadWriteOnce],ClaimRef:&ObjectReference{Kind:PersistentVolumeClaim,Namespace:velero,Name:bookstack-20240511011803-vmnfl,UID:1386fa0d-0073-4d39-8b6c-2ac77a09e74d,APIVersion:v1,ResourceVersion:181086582,FieldPath:,},PersistentVolumeReclaimPolicy:Delete,StorageClassName:truenas,MountOptions:[],VolumeMode:*Filesystem,NodeAffinity:nil,},Status:PersistentVolumeStatus{Phase:Released,Message:,Reason:,LastPhaseTransitionTime:2024-05-11 01:19:04 +0000 UTC,},}
I0603 08:56:51.080215       1 controller.go:1239] shouldDelete volume "pvc-1386fa0d-0073-4d39-8b6c-2ac77a09e74d"
I0603 08:56:51.080222       1 controller.go:1269] shouldDelete volume "pvc-1386fa0d-0073-4d39-8b6c-2ac77a09e74d" is true
I0603 08:56:51.080228       1 controller.go:1113] shouldDelete Volume: "pvc-1386fa0d-0073-4d39-8b6c-2ac77a09e74d"
I0603 08:56:51.080234       1 controller.go:1509] delete "pvc-1386fa0d-0073-4d39-8b6c-2ac77a09e74d": started
I0603 08:56:51.080244       1 controller.go:1279] volume pvc-1386fa0d-0073-4d39-8b6c-2ac77a09e74d does not need any deletion secrets
I0603 08:56:51.080257       1 connection.go:193] GRPC call: /csi.v1.Controller/DeleteVolume
I0603 08:56:51.080352       1 connection.go:194] GRPC request: {"volume_id":"pvc-1386fa0d-0073-4d39-8b6c-2ac77a09e74d"}
I0603 08:56:51.090349       1 connection.go:200] GRPC response: {}
I0603 08:56:51.090361       1 connection.go:201] GRPC error: rpc error: code = Internal desc = Error: "<html>\r\n<head><title>405 Not Allowed</title></head>\r\n<body>\r\n<center><h1>405 Not Allowed</h1></center>\r\n<hr><center>nginx</center>\r\n</body>\r\n</html>\r\n" Error: "<html>\r\n<head><title>405 Not Allowed</title></head>\r\n<body>\r\n<center><h1>405 Not Allowed</h1></center>\r\n<hr><center>nginx</center>\r\n</body>\r\n</html>\r\n"
    at Api.DatasetDelete (/home/csi/app/src/driver/freenas/http/api.js:423:11)
    at process.processTicksAndRejections (node:internal/process/task_queues:95:5)
    at async /home/csi/app/src/driver/freenas/api.js:3102:11
    at async Object.retry (/home/csi/app/src/utils/general.js:217:14)
    at async FreeNASApiDriver.DeleteVolume (/home/csi/app/src/driver/freenas/api.js:3098:7)
    at async requestHandlerProxy (/home/csi/app/bin/democratic-csi:222:18)
E0603 08:56:51.090377       1 controller.go:1519] delete "pvc-be9cae9a-cc12-4caa-8605-997609cd3070": volume deletion failed: rpc error: code = Internal desc = Error: "<html>\r\n<head><title>405 Not Allowed</title></head>\r\n<body>\r\n<center><h1>405 Not Allowed</h1></center>\r\n<hr><center>nginx</center>\r\n</body>\r\n</html>\r\n" Error: "<html>\r\n<head><title>405 Not Allowed</title></head>\r\n<body>\r\n<center><h1>405 Not Allowed</h1></center>\r\n<hr><center>nginx</center>\r\n</body>\r\n</html>\r\n"
    at Api.DatasetDelete (/home/csi/app/src/driver/freenas/http/api.js:423:11)
    at process.processTicksAndRejections (node:internal/process/task_queues:95:5)
    at async /home/csi/app/src/driver/freenas/api.js:3102:11
    at async Object.retry (/home/csi/app/src/utils/general.js:217:14)
    at async FreeNASApiDriver.DeleteVolume (/home/csi/app/src/driver/freenas/api.js:3098:7)
    at async requestHandlerProxy (/home/csi/app/bin/democratic-csi:222:18)
W0603 08:56:51.090394       1 controller.go:989] Retrying syncing volume "pvc-be9cae9a-cc12-4caa-8605-997609cd3070", failure 166
E0603 08:56:51.090413       1 controller.go:1007] error syncing volume "pvc-be9cae9a-cc12-4caa-8605-997609cd3070": rpc error: code = Internal desc = Error: "<html>\r\n<head><title>405 Not Allowed</title></head>\r\n<body>\r\n<center><h1>405 Not Allowed</h1></center>\r\n<hr><center>nginx</center>\r\n</body>\r\n</html>\r\n" Error: "<html>\r\n<head><title>405 Not Allowed</title></head>\r\n<body>\r\n<center><h1>405 Not Allowed</h1></center>\r\n<hr><center>nginx</center>\r\n</body>\r\n</html>\r\n"
    at Api.DatasetDelete (/home/csi/app/src/driver/freenas/http/api.js:423:11)
    at process.processTicksAndRejections (node:internal/process/task_queues:95:5)
    at async /home/csi/app/src/driver/freenas/api.js:3102:11
    at async Object.retry (/home/csi/app/src/utils/general.js:217:14)
    at async FreeNASApiDriver.DeleteVolume (/home/csi/app/src/driver/freenas/api.js:3098:7)
    at async requestHandlerProxy (/home/csi/app/bin/democratic-csi:222:18)
I0603 08:56:51.090683       1 event.go:298] Event(v1.ObjectReference{Kind:"PersistentVolume", Namespace:"", Name:"pvc-be9cae9a-cc12-4caa-8605-997609cd3070", UID:"9853471e-1c8d-4cdb-a176-56d39045011e", APIVersion:"v1", ResourceVersion:"180295056", FieldPath:""}): type: 'Warning' reason: 'VolumeFailedDelete' rpc error: code = Internal desc = Error: "<html>\r\n<head><title>405 Not Allowed</title></head>\r\n<body>\r\n<center><h1>405 Not Allowed</h1></center>\r\n<hr><center>nginx</center>\r\n</body>\r\n</html>\r\n" Error: "<html>\r\n<head><title>405 Not Allowed</title></head>\r\n<body>\r\n<center><h1>405 Not Allowed</h1></center>\r\n<hr><center>nginx</center>\r\n</body>\r\n</html>\r\n"
    at Api.DatasetDelete (/home/csi/app/src/driver/freenas/http/api.js:423:11)
    at process.processTicksAndRejections (node:internal/process/task_queues:95:5)
    at async /home/csi/app/src/driver/freenas/api.js:3102:11
    at async Object.retry (/home/csi/app/src/utils/general.js:217:14)
    at async FreeNASApiDriver.DeleteVolume (/home/csi/app/src/driver/freenas/api.js:3098:7)
    at async requestHandlerProxy (/home/csi/app/bin/democratic-csi:222:18)
I0603 08:56:51.094740       1 connection.go:200] GRPC response: {}
I0603 08:56:51.094751       1 connection.go:201] GRPC error: rpc error: code = Internal desc = Error: "<html>\r\n<head><title>405 Not Allowed</title></head>\r\n<body>\r\n<center><h1>405 Not Allowed</h1></center>\r\n<hr><center>nginx</center>\r\n</body>\r\n</html>\r\n" Error: "<html>\r\n<head><title>405 Not Allowed</title></head>\r\n<body>\r\n<center><h1>405 Not Allowed</h1></center>\r\n<hr><center>nginx</center>\r\n</body>\r\n</html>\r\n"
    at Api.DatasetDelete (/home/csi/app/src/driver/freenas/http/api.js:423:11)
    at process.processTicksAndRejections (node:internal/process/task_queues:95:5)
    at async /home/csi/app/src/driver/freenas/api.js:3102:11
    at async Object.retry (/home/csi/app/src/utils/general.js:217:14)
    at async FreeNASApiDriver.DeleteVolume (/home/csi/app/src/driver/freenas/api.js:3098:7)
    at async requestHandlerProxy (/home/csi/app/bin/democratic-csi:222:18)
E0603 08:56:51.094771       1 controller.go:1519] delete "pvc-1386fa0d-0073-4d39-8b6c-2ac77a09e74d": volume deletion failed: rpc error: code = Internal desc = Error: "<html>\r\n<head><title>405 Not Allowed</title></head>\r\n<body>\r\n<center><h1>405 Not Allowed</h1></center>\r\n<hr><center>nginx</center>\r\n</body>\r\n</html>\r\n" Error: "<html>\r\n<head><title>405 Not Allowed</title></head>\r\n<body>\r\n<center><h1>405 Not Allowed</h1></center>\r\n<hr><center>nginx</center>\r\n</body>\r\n</html>\r\n"
    at Api.DatasetDelete (/home/csi/app/src/driver/freenas/http/api.js:423:11)
    at process.processTicksAndRejections (node:internal/process/task_queues:95:5)
    at async /home/csi/app/src/driver/freenas/api.js:3102:11
    at async Object.retry (/home/csi/app/src/utils/general.js:217:14)
    at async FreeNASApiDriver.DeleteVolume (/home/csi/app/src/driver/freenas/api.js:3098:7)
    at async requestHandlerProxy (/home/csi/app/bin/democratic-csi:222:18)
W0603 08:56:51.094788       1 controller.go:989] Retrying syncing volume "pvc-1386fa0d-0073-4d39-8b6c-2ac77a09e74d", failure 166
I0603 08:56:51.094837       1 event.go:298] Event(v1.ObjectReference{Kind:"PersistentVolume", Namespace:"", Name:"pvc-1386fa0d-0073-4d39-8b6c-2ac77a09e74d", UID:"26db9a7e-ae28-4ac5-acab-8cda5ff00ce4", APIVersion:"v1", ResourceVersion:"181087012", FieldPath:""}): type: 'Warning' reason: 'VolumeFailedDelete' rpc error: code = Internal desc = Error: "<html>\r\n<head><title>405 Not Allowed</title></head>\r\n<body>\r\n<center><h1>405 Not Allowed</h1></center>\r\n<hr><center>nginx</center>\r\n</body>\r\n</html>\r\n" Error: "<html>\r\n<head><title>405 Not Allowed</title></head>\r\n<body>\r\n<center><h1>405 Not Allowed</h1></center>\r\n<hr><center>nginx</center>\r\n</body>\r\n</html>\r\n"
    at Api.DatasetDelete (/home/csi/app/src/driver/freenas/http/api.js:423:11)
    at process.processTicksAndRejections (node:internal/process/task_queues:95:5)
    at async /home/csi/app/src/driver/freenas/api.js:3102:11
    at async Object.retry (/home/csi/app/src/utils/general.js:217:14)
    at async FreeNASApiDriver.DeleteVolume (/home/csi/app/src/driver/freenas/api.js:3098:7)
    at async requestHandlerProxy (/home/csi/app/bin/democratic-csi:222:18)
E0603 08:56:51.094863       1 controller.go:1007] error syncing volume "pvc-1386fa0d-0073-4d39-8b6c-2ac77a09e74d": rpc error: code = Internal desc = Error: "<html>\r\n<head><title>405 Not Allowed</title></head>\r\n<body>\r\n<center><h1>405 Not Allowed</h1></center>\r\n<hr><center>nginx</center>\r\n</body>\r\n</html>\r\n" Error: "<html>\r\n<head><title>405 Not Allowed</title></head>\r\n<body>\r\n<center><h1>405 Not Allowed</h1></center>\r\n<hr><center>nginx</center>\r\n</body>\r\n</html>\r\n"
    at Api.DatasetDelete (/home/csi/app/src/driver/freenas/http/api.js:423:11)
    at process.processTicksAndRejections (node:internal/process/task_queues:95:5)
    at async /home/csi/app/src/driver/freenas/api.js:3102:11
    at async Object.retry (/home/csi/app/src/utils/general.js:217:14)
    at async FreeNASApiDriver.DeleteVolume (/home/csi/app/src/driver/freenas/api.js:3098:7)
    at async requestHandlerProxy (/home/csi/app/bin/democratic-csi:222:18)

I had a look at pvc-1386fa0d-0073-4d39-8b6c-2ac77a09e74d and it is a temporary volume created by Velero. Basically I think Velero works by taking a snapshot of a volume, spawning a new volume from that snapshot, and then performing an incremental backup of the files in the new volume. At the end, it deletes the temporary volume, but these ones seem to be stuck despite the reclaim policy being Delete.
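For reference, I believe the flow above corresponds roughly to Velero creating objects like these (a sketch only; the names and snapshot class are illustrative, with the PVC name taken from the log):

```yaml
# 1. Snapshot the source PVC (hypothetical names):
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
  name: bookstack-backup-snap      # illustrative name
  namespace: bookstack
spec:
  volumeSnapshotClassName: truenas # illustrative class
  source:
    persistentVolumeClaimName: bookstack
---
# 2. Spawn a temporary PVC from that snapshot — this matches the
#    velero/bookstack-20240511011803-vmnfl claim seen in the log:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: bookstack-20240511011803-vmnfl
  namespace: velero
spec:
  storageClassName: truenas
  accessModes: [ReadWriteOnce]
  dataSource:
    name: bookstack-backup-snap
    kind: VolumeSnapshot
    apiGroup: snapshot.storage.k8s.io
  resources:
    requests:
      storage: 1Gi
```

Once the backup completes, Velero deletes the temporary PVC, and with reclaimPolicy Delete the bound PV should be cleaned up by the CSI driver — which is the step failing here with the nginx 405.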

[jonathan@poseidon ~]$ kubectl get pv pvc-1386fa0d-0073-4d39-8b6c-2ac77a09e74d
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS     CLAIM                                   STORAGECLASS   VOLUMEATTRIBUTESCLASS   REASON   AGE
pvc-1386fa0d-0073-4d39-8b6c-2ac77a09e74d   1Gi        RWO            Delete           Released   velero/bookstack-20240511011803-vmnfl   truenas        <unset>                          23d

I hope this is enough info to go on - please let me know if you need any more logs. I guess this is actually two issues, intertwined in the same log, but both result in TrueNAS API errors and are worth looking at.
