[Feature request] Add support for custom path for snapshots #562

Closed
smuda opened this issue Dec 11, 2023 · 6 comments
Labels
lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed.

Comments


smuda commented Dec 11, 2023

Is your feature request related to a problem?/Why is this needed
When snapshots are created, they end up at the NFS root with names that don't make clear which volume claim a snapshot originated from. It is also problematic, after hundreds of snapshots, to have them all in the same folder as the volumes.

Describe the solution you'd like in detail
For volumes, I can specify the subpath in StorageClass.parameters.subdir with templating, for example ${pvc.metadata.namespace}/pvc-${pvc.metadata.name}.

I'd propose using the same model for VolumeSnapshotClass.parameters.subdir with support for at least vs.metadata.name, vs.metadata.namespace and vs.spec.source.persistentVolumeClaimName.
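
A minimal sketch of what that could look like, assuming the snapshot parameter mirrors the existing StorageClass templating (the VolumeSnapshotClass subdir parameter and its ${vs.*} variables are the proposal here, not an implemented API, and the exact parameter spelling should be checked against the driver docs):

# Existing: subdir templating on the StorageClass (server/share/class names taken from the logs and listings in this issue)
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: nfs-csi
provisioner: nfs.csi.k8s.io
parameters:
  server: nfs-server.default.svc.cluster.local
  share: /
  subdir: ${pvc.metadata.namespace}/pvc-${pvc.metadata.name}
---
# Proposed (hypothetical): the same model on the VolumeSnapshotClass
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshotClass
metadata:
  name: csi-nfs-snapclass
driver: nfs.csi.k8s.io
deletionPolicy: Delete
parameters:
  subdir: snapshots/${vs.metadata.namespace}/${vs.spec.source.persistentVolumeClaimName}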

@andyzhangx
Member

@smuda the *.tar.gz already contains the PV name; check the logs below. Is that enough?

[pod/csi-test-controller-9784fb589-nprbv/nfs] I1213 02:44:38.584541       1 utils.go:108] GRPC call: /csi.v1.Controller/CreateSnapshot
[pod/csi-test-controller-9784fb589-nprbv/nfs] I1213 02:44:38.584560       1 utils.go:109] GRPC request: {"name":"snapshot-a56cdc8b-a0e2-4f46-b3a0-916ffed068eb","source_volume_id":"nfs-server.default.svc.cluster.local##pvc-9e6bdd5a-c6c0-466a-b0b5-9b6cbf84a0ac##"}
[pod/csi-test-controller-9784fb589-nprbv/nfs] I1213 02:44:38.618522       1 nodeserver.go:144] volume(nfs-server.default.svc.cluster.local##pvc-9e6bdd5a-c6c0-466a-b0b5-9b6cbf84a0ac##) mount nfs-server.default.svc.cluster.local:/ on /tmp/pvc-9e6bdd5a-c6c0-466a-b0b5-9b6cbf84a0ac succeeded
[pod/csi-test-controller-9784fb589-nprbv/nfs] I1213 02:44:38.618649       1 controllerserver.go:357] archiving /tmp/pvc-9e6bdd5a-c6c0-466a-b0b5-9b6cbf84a0ac/pvc-9e6bdd5a-c6c0-466a-b0b5-9b6cbf84a0ac -> /tmp/snapshot-a56cdc8b-a0e2-4f46-b3a0-916ffed068eb/snapshot-a56cdc8b-a0e2-4f46-b3a0-916ffed068eb/pvc-9e6bdd5a-c6c0-466a-b0b5-9b6cbf84a0ac.tar.gz
[pod/csi-test-controller-9784fb589-nprbv/nfs] I1213 02:44:38.622663       1 controllerserver.go:362] archived /tmp/pvc-9e6bdd5a-c6c0-466a-b0b5-9b6cbf84a0ac/pvc-9e6bdd5a-c6c0-466a-b0b5-9b6cbf84a0ac -> /tmp/snapshot-a56cdc8b-a0e2-4f46-b3a0-916ffed068eb/snapshot-a56cdc8b-a0e2-4f46-b3a0-916ffed068eb/pvc-9e6bdd5a-c6c0-466a-b0b5-9b6cbf84a0ac.tar.gz


smuda commented Dec 14, 2023

Only as long as the cluster is healthy :-)

So the PVCs we create have nice, understandable names which, especially in the context of a namespace, make sense:

NAME                         STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS
alert-pg                     Bound    pvc-cc3f3d08-f0ef-4769-a9c7-bf0901b108eb   5Gi        RWO            nfs-csi       
...

But the automatically created PVs do not get understandable names:

NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                                             STORAGECLASS   REASON 
pvc-cc3f3d08-f0ef-4769-a9c7-bf0901b108eb   5Gi        RWO            Delete           Bound    bd/alert-pg                                       nfs-csi
...

So even though the PV name ("pvc-cc3f3d08-f0ef-4769-a9c7-bf0901b108eb") is included in the path, it doesn't really tell me anything (not even the namespace) when I'm trying to restore the PVs after the cluster has gone "boom".

In my context, I'd like to create the snapshots inside a per-namespace directory, named after the PVC, plus of course some kind of uid (or date/time?) to distinguish between the snapshots, for example:
subdir="snapshots/${vs.metadata.namespace}/${vs.spec.source.persistentVolumeClaimName}"

@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle stale
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

k8s-ci-robot added the lifecycle/stale label on Mar 13, 2024
@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle rotten
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

k8s-ci-robot added the lifecycle/rotten label and removed the lifecycle/stale label on Apr 12, 2024
@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue with /reopen
  • Mark this issue as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned

@k8s-ci-robot
Contributor

@k8s-triage-robot: Closing this issue, marking it as "Not Planned".

In response to the triage bot's /close not-planned comment above.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.

k8s-ci-robot closed this as not planned on May 12, 2024