Migrating parent dataset to a different pool in TrueNAS #391

Closed
shikharbhardwaj opened this issue May 1, 2024 · 5 comments

@shikharbhardwaj
Hello!

I am using democratic-csi to manage TrueNAS-backed PVs for my k8s cluster. Running 3 variants of the helm chart, one each for NFS, iSCSI and manual PVs. Everything has been pretty smooth so far, thanks for this excellent piece of software!

I now have the need to migrate the PVs onto a different pool, to expand available storage. I have begun the migration with the iSCSI PVs first. I replicated the existing datasets into the new pool and then updated the helm chart to point to the new parent dataset.

Attaching the helm chart values here (fields that I changed are marked as UPDATED):

Helm values.yaml

```yaml
csiDriver:
  name: "org.democratic-csi.iscsi"

storageClasses:
- name: freenas-iscsi-csi
  defaultClass: false
  reclaimPolicy: Delete
  volumeBindingMode: Immediate
  allowVolumeExpansion: true
  parameters:
    fsType: ext4

volumeSnapshotClasses: []

driver:
  config:
    driver: freenas-api-iscsi
    instance_id: truenas
    httpConnection:
      protocol: https
      host: "{{ truenas_ip }}"
      port: 443
      apiKey: "{{ truenas_api_key }}"
      allowInsecure: true
    zfs:
      datasetParentName: main-pool/live/winnipeg/a/vols # --- UPDATED ---
      detachedSnapshotsDatasetParentName: main-pool/live/winnipeg/a/snaps # --- UPDATED ---
      datasetEnableQuotas: true
      datasetEnableReservation: false
      datasetPermissionsMode: "0777"
      datasetPermissionsUser: 1000
      datasetPermissionsGroup: 1000
    iscsi:
      targetPortal: "{{ truenas_ip }}"

      namePrefix: csi-winnipeg-live- # --- UPDATED ---
      nameSuffix: "-clustera"

      targetGroups:
        - targetGroupPortalGroup: 1
          targetGroupInitiatorGroup: 1
          targetGroupAuthType: None

      extentInsecureTpc: true
      extentXenCompat: false
      extentDisablePhysicalBlocksize: true
      extentBlocksize: 512
      extentRpm: "7200"
      extentAvailThreshold: 0
```
After this, I restarted all democratic-csi pods and then restarted all statefulsets which mount these PVs. The pods came up successfully, but when I looked at the iSCSI shares in the TrueNAS GUI, I did not find new shares getting created for the new pool. It looks like the PVs are still using the old datasets. How can I get these PVs to use the new pool?
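For context, my understanding is that the binding to the old dataset lives in the PV object itself, not in the driver config. A hypothetical PV provisioned by the driver looks roughly like this (the `volumeHandle` and `volumeAttributes` values below are illustrative guesses, not copied from a real PV), with the original zvol/target still baked in:

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pvc-0a1b2c3d
spec:
  capacity:
    storage: 10Gi
  csi:
    driver: org.democratic-csi.iscsi
    # These fields were written at provisioning time and still point at the
    # old pool's zvol/target, which is why no new shares appear after the
    # driver config changes.
    volumeHandle: pvc-0a1b2c3d
    volumeAttributes:
      iqn: iqn.2005-10.org.freenas.ctl:csi-pvc-0a1b2c3d-clustera
      portal: "truenas.local:3260"
      lun: "0"
```

Inspecting an existing PV with `kubectl get pv <name> -o yaml` should show which dataset it is actually bound to.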

Thank you

@rouke-broersma

The PV ultimately decides what the backing storage looks like. The settings in the CSI driver config are more like a template for dynamic provisioning, so they apply to newly provisioned volumes but not to existing ones. You will need to modify the PVs (not sure that's possible, though) to point to the new storage location.

However why don't you just resize your existing vdevs instead of creating a new pool?

@shikharbhardwaj
Author

> You will need to modify the PVs (not sure that's possible, though) to point to the new storage location.

Ah okay, that seems to be a bit more involved than I thought. I will need to rethink the migration steps; maybe I can use snapshots to restore state onto new PVs in the new pool.

> However why don't you just resize your existing vdevs instead of creating a new pool?

I am switching the vdev layout for my primary pool (going from RAIDZ1 to mirrored pairs) for easier expansion.

@shikharbhardwaj
Author

shikharbhardwaj commented May 5, 2024

I tried restoring a zfs snapshot using `zfs send | zfs recv` onto a new zvol created on the new pool by democratic-csi, but ran into this error:

```
> zfs send <original_zvol>@<snapshot> | zfs recv -F <new_zvol>
cannot receive new filesystem stream: zfs receive -F cannot be used to destroy an encrypted filesystem or overwrite an unencrypted one with an encrypted one
```

So it looks like I have to create the zvols from the snapshot from scratch.
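A sketch of the workaround (pool/dataset names below are placeholders, and this is untested against TrueNAS specifically): instead of overwriting the zvol that democratic-csi pre-created with `recv -F`, send into a name that does not exist yet and let `zfs recv` create the target itself. For an encrypted source, a raw send (`zfs send -w`) transmits the stream with its encryption properties intact, which avoids the encrypted-over-unencrypted conflict:

```
# placeholder paths -- substitute the real pool/dataset names
zfs snapshot old-pool/vols/pvc-0a1b2c3d@migrate
zfs send -w old-pool/vols/pvc-0a1b2c3d@migrate \
  | zfs recv main-pool/live/winnipeg/a/vols/pvc-0a1b2c3d
```

The receiving dataset is created by `zfs recv`, so nothing already provisioned by the driver is destroyed or overwritten.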

After some reading on similar issues (#289, #300), it looks like specifying the underlying storage is not supported. So it looks like I can either:

  • Use the unsupported idTemplate (ref), restore the old zvols to new names matching this template, and then create new PVCs that pick up these zvols.
  • Manually create new zvols and IQNs and update each PV with this info.
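For the first option, a sketch of what the driver config override might look like. Both the config path and the template variables below are assumptions pieced together from democratic-csi's documented (but unsupported) idTemplate option, not something I have verified against the version in use:

```yaml
# hypothetical sketch -- verify the exact path and variables before use
driver:
  config:
    _private:
      csi:
        volume:
          # name volumes after the PVC namespace/name instead of a random id,
          # so restored zvols can be renamed to match predictably
          idTemplate: "{{ parameters.[csi.storage.k8s.io/pvc/namespace] }}-{{ parameters.[csi.storage.k8s.io/pvc/name] }}"
```

With a deterministic template like this, the restored zvols can be renamed (`zfs rename`) to the names the driver will compute for the new PVCs before those PVCs are created.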

Unless I am missing something, restoring democratic-csi backed PVs from backup seems to be a very involved process.

@shikharbhardwaj
Author

Just circling back here to report that the migration was successful with the update to idTemplate. I was able to move all zvols/datasets to the new pool and recreate the PVs in the cluster, which picked up the new ones on the new pool. I believe the main problem with the modified idTemplate is that it may exceed the volume name length limits, but in my use case that is not a problem.
Linking a couple of scripts I created to help with this process: https://gist.github.com/shikharbhardwaj/c7361f7a0015255388148b8f5ef6204b

@travisghansen
Member

Thanks for sharing! A couple other relevant resources:
