Helm chart for CRDs and snapshot controller? #551

Closed
gman0 opened this issue Jul 12, 2021 · 18 comments · May be fixed by #622

Labels
lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed.

Comments

@gman0

gman0 commented Jul 12, 2021

Are there any plans to offer snapshot CRDs, controller, etc. as a single Helm chart? If not, would you accept contributions in this area?

I realize that these currently external components (CRDs, controller) will probably be incorporated into Kubernetes at some point in the future, so maybe it doesn't make sense to maintain a Helm chart for them.

@xing-yang
Collaborator

@gman0 Thanks for your willingness to contribute! We are discussing this.

@WanzenBug
Contributor

Hey @gman0

FYI: I recently worked on exactly this: https://github.com/piraeusdatastore/helm-charts/

My main motivation was to help users of Piraeus enable snapshotting on their clusters, but I'd be happy if there were an "official" solution.
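
For reference, installing from that repository might look like the following (a sketch: the repo alias, release names, and namespace are placeholders; the chart names and repository URL come from the Piraeus helm-charts linked above):

helm repo add piraeus-charts https://piraeus.io/helm-charts/
helm install snapshot-validation-webhook piraeus-charts/snapshot-validation-webhook --namespace kube-system
helm install snapshot-controller piraeus-charts/snapshot-controller --namespace kube-system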

@gman0
Author

gman0 commented Jul 15, 2021

@WanzenBug thank you! That is exactly my motivation as well, but I have only just started working on this. At first glance your solution looks great; it even seems to offer a deployment for the webhook. Would you like to take over this ticket? That is, of course, if the sig-storage folks think this makes sense.

@Xyaren

Xyaren commented Jul 22, 2021

I would also really appreciate an official Helm chart, as it would make installing and updating so much easier.

@kfox1111

kfox1111 commented Aug 5, 2021

Using Helm to manage it would be preferable for us. It would make it easy to customize for our environment and to keep it up to date.

@xing-yang
Collaborator

We have YAML files that are used by CI. If we add Helm charts, can the charts be used to generate those YAML files so that the two don't go out of sync?

@kfox1111

kfox1111 commented Aug 5, 2021

Yeah. You can run something like 'helm template external-snapshotter > static.yaml' to generate them.
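
Concretely, a CI step could render the chart and fail whenever the checked-in manifests drift (a sketch: the release name, chart path, and output file are assumptions, since no official chart exists yet):

# Render the chart to the static manifests used by CI.
helm template snapshot-controller ./charts/external-snapshotter --namespace kube-system > deploy/static.yaml
# Fail the job if the rendered output differs from what is committed.
git diff --exit-code deploy/static.yaml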

@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle stale
  • Mark this issue or PR as rotten with /lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Nov 3, 2021
@Xyaren

Xyaren commented Nov 3, 2021

/remove-lifecycle stale

@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle stale
  • Mark this issue or PR as rotten with /lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Feb 1, 2022
@Xyaren

Xyaren commented Feb 2, 2022

/remove-lifecycle stale

@k8s-ci-robot k8s-ci-robot removed the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Feb 2, 2022
@dashjay

dashjay commented Feb 25, 2022

I was using Helm chart 2.1.2 and found that older versions of the deployment could create the CRDs (the VolumeSnapshot-related ones) by themselves.
Recently our storage was running low, so we wanted to set up a new Ceph cluster.

I ran into the following issues:

  • the new csi-snapshotter no longer applies the CRDs to the cluster,
  • there are no CRDs in the Helm chart, and
  • the new CRDs drop support for v1alpha1.

Since the snapshotter CRDs haven't changed much and there aren't many new features, I think a conversion webhook could be provided to help users deprecate old API versions smoothly (see the sketch below for checking which versions a cluster currently serves).
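
One hedged way to inspect which API versions the VolumeSnapshot CRD serves and stores on a given cluster (assuming the CRD name shipped by this repository):

kubectl get crd volumesnapshots.snapshot.storage.k8s.io \
  -o jsonpath='{range .spec.versions[*]}{.name}{" served="}{.served}{" storage="}{.storage}{"\n"}{end}'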

@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle stale
  • Mark this issue or PR as rotten with /lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label May 26, 2022
@nariman-maad

Yes, Helm is more convenient for automation.

@trallnag

trallnag commented Jun 3, 2022

I agree that a Helm chart would be more convenient, although I also see that maintaining a proper chart requires more resources. The charts provided by @WanzenBug work great, by the way; here is how we consume them from Terraform:

resource "helm_release" "webhook" {
  name       = "snapshot-validation-webhook"
  chart      = "snapshot-validation-webhook"
  repository = "https://piraeus.io/helm-charts/"
  version    = var.webhook_chart_version

  namespace        = local.namespace

  values = [yamlencode({
    nodeSelector = var.node_selector
  })]

  depends_on = [
    kubectl_manifest.crds,
  ]
}

resource "helm_release" "controller" {
  name       = "snapshot-controller"
  chart      = "snapshot-controller"
  repository = "https://piraeus.io/helm-charts/"
  version    = var.controller_chart_version

  namespace        = local.namespace

  values = [yamlencode({
    nodeSelector = var.node_selector
  })]

  depends_on = [
    kubectl_manifest.crds,
    helm_release.webhook,
  ]
}
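
The kubectl_manifest.crds dependency above refers to applying the snapshot CRDs out of band. A sketch of that step, assuming the CRD manifests from this repository's client/config/crd directory on master:

kubectl apply -f https://raw.githubusercontent.com/kubernetes-csi/external-snapshotter/master/client/config/crd/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
kubectl apply -f https://raw.githubusercontent.com/kubernetes-csi/external-snapshotter/master/client/config/crd/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
kubectl apply -f https://raw.githubusercontent.com/kubernetes-csi/external-snapshotter/master/client/config/crd/snapshot.storage.k8s.io_volumesnapshots.yaml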

@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

@k8s-ci-robot k8s-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Jul 3, 2022
@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue or PR with /reopen
  • Mark this issue or PR as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close

@k8s-ci-robot
Contributor

@k8s-triage-robot: Closing this issue.

In response to this:

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue or PR with /reopen
  • Mark this issue or PR as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
