[bitnami/influxdb] Allow separate persistence cfg for backups #24586
Conversation
Force-pushed from e10dcaf to f55bf93
Force-pushed from f55bf93 to b627d37
Hi, thanks for your contribution. Could you please grant access to bitnami-bot on your fork? It is needed by some automation that updates the README and CRDs when necessary.
Force-pushed from b627d37 to c75d090
@carrodher I've sent the invite with Write access to bitnami-bot.
@andresbono Any word on whether we can merge this? I don't want it to get closed due to inactivity like last time 😄
Thank you for your contribution! Could you please check my comments?
Force-pushed from a3b09b2 to e07ab8b
`coalesce` seems to do the trick. I've refactored the PR to clean up the mess I made earlier, and I ran through a few scenarios to evaluate the output of the chart:

# Should create no PVCs, use the same volume claim for both (still not sure if this makes sense)
helm template test . -f values.yaml --set backup.enabled=true --set backup.persistence.enabled=false --set persistence.existingClaim=foo
# Should create one PVC for the backup
helm template test . -f values.yaml --set backup.enabled=true --set backup.persistence.enabled=true --set persistence.existingClaim=foo
# Should create one PVC for the database
helm template test . -f values.yaml --set backup.enabled=true --set backup.persistence.enabled=true --set backup.persistence.existingClaim=foo --set persistence.size=16Gi
# Should create one PVC for the backup with size=16Gi
helm template test . -f values.yaml --set backup.enabled=true --set backup.persistence.enabled=true --set persistence.enabled=true --set persistence.existingClaim=foo --set backup.persistence.size=16Gi
# Should create no PVCs, reference existing claims for both
helm template test . -f values.yaml --set backup.enabled=true --set backup.persistence.enabled=true --set persistence.existingClaim=foo --set backup.persistence.existingClaim=bar
# Should only use foo and ignore bar (since backup.persistence.enabled=false)
helm template test . -f values.yaml --set backup.enabled=true --set backup.persistence.enabled=false --set persistence.existingClaim=foo --set backup.persistence.existingClaim=bar

I am still worried that the default behaviour for many (most?) consumers of this chart would be to have the database's PVC set to ReadWriteOnce mode, which would prevent the CronJob from ever launching. Ultimately, if the Bitnami team thinks it's okay to switch the CronJob to using a separate persistent volume as a default for backups, I can change the default value of that parameter.
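For reference, one of the scenarios above expressed as a values file instead of `--set` flags might look like this (a sketch using the parameter layout proposed in this PR; `foo` is the placeholder claim name from the commands above):

```yaml
# Sketch only: should create one PVC for the backup, while the
# database reuses the pre-existing claim "foo".
persistence:
  existingClaim: foo
backup:
  enabled: true
  persistence:
    enabled: true
```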
Force-pushed from 0448d6d to c604d8b
This looks great now! Thank you so much. Could you check my suggestions and run your helm template scenarios again?
The main point of my suggestions is to introduce a new parameter, `backup.persistence.ownConfig`, to control which set of persistence parameters is applied. I think it is important for it to be a different parameter than `backup.persistence.enabled`, to cover the case in which you want persistence for the database but not for the backup, for example.
As we are dealing with application persisted data here, we should be extra safe :)
bitnami/influxdb/Chart.yaml
Outdated
Suggested change (indentation only):
- name: common
  repository: oci://registry-1.docker.io/bitnamicharts
  tags:
    - bitnami-common
  version: 2.x.x
bitnami/influxdb/Chart.yaml
Outdated
Suggested change (indentation only):
- influxdb
- tick
- database
- timeseries
bitnami/influxdb/Chart.yaml
Outdated
Suggested change (indentation only):
- name: VMware, Inc.
  url: https://github.com/bitnami/charts
bitnami/influxdb/Chart.yaml
Outdated
name: influxdb
sources:
  - https://github.com/bitnami/charts/tree/main/bitnami/influxdb
version: 6.0.6

Suggested change (indentation only):
- https://github.com/bitnami/charts/tree/main/bitnami/influxdb
The YAML is still valid with both levels of indentation but it will be reverted in the next automatic update causing unnecessary commit changes.
Yeah, my YAML formatting plugin has been aggressively asserting itself. I'll change them back.
@@ -129,6 +129,8 @@
As an alternative, you can use of the preset configurations for pod affinity, po…
The data is persisted by default using PVC(s). You can disable the persistence setting the `persistence.enabled` parameter to `false`.
A default `StorageClass` is needed in the Kubernetes cluster to dynamically provision the volumes. Specify another StorageClass in the `persistence.storageClass` or set `persistence.existingClaim` if you have already existing persistent volumes to use.

+ If you would like to define persistence settings for a backup volume that differ from the persistence settings for the database volume, you may do so under the `backup.persistence` section of the configuration. If this section is undefined, but `backup.enabled` is set to true, the backup volume will be defined using the `persistence` parameter section.
Suggested change:
- If you would like to define persistence settings for a backup volume that differ from the persistence settings for the database volume, you may do so under the `backup.persistence` section of the configuration. If this section is undefined, but `backup.enabled` is set to true, the backup volume will be defined using the `persistence` parameter section.
+ If you would like to define persistence settings for a backup volume that differ from the persistence settings for the database volume, you may do so under the `backup.persistence` section of the configuration by setting `backup.persistence.ownConfig` to `true`. The backup volume will otherwise be defined using the `persistence` parameter section.
Just to clarify: if someone doesn't set `backup.persistence.ownConfig: true`, and they have `persistence.existingClaim` set to some volume, should the effective value of `backup.persistence.existingClaim` be the same? That would lead to binding of the same target volume, which is the RWO problem I mentioned.
If someone has `backup.persistence.ownConfig: false` (which is the proposed default, at least in this first version), the chart will have the very same behavior as the current version and any `backup.persistence.*` parameter will be ignored.
Any bugs present in the current version will still be there unless you explicitly set `backup.persistence.ownConfig: true`. Then you have all the flexibility you want, right?
{{- /*
Prefer .Values.backup.persistence, but fall back to .Values.persistence if not present.
*/ -}}
{{- if .Values.backup.enabled }}
{{ $persistence := coalesce .Values.backup.persistence .Values.persistence }}
{{ if and $persistence.enabled (not $persistence.existingClaim) }}
Suggested change:
- {{- /* Prefer .Values.backup.persistence, but fall back to .Values.persistence if not present. */ -}}
- {{- if .Values.backup.enabled }}
- {{ $persistence := coalesce .Values.backup.persistence .Values.persistence }}
- {{ if and $persistence.enabled (not $persistence.existingClaim) }}
+ {{- $persistence := ternary .Values.backup.persistence .Values.persistence .Values.backup.persistence.ownConfig }}
+ {{- if and .Values.backup.enabled $persistence.enabled (not $persistence.existingClaim) }}
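For context on the suggestion above: Sprig's `coalesce` returns its first non-empty argument, so an empty `backup.persistence` map would silently fall back to `persistence`, whereas `ternary` picks between the two maps based solely on the explicit flag. A hypothetical values fragment showing how the flag drives the suggested line (sizes are illustrative):

```yaml
persistence:
  enabled: true
  size: 8Gi
backup:
  enabled: true
  persistence:
    ownConfig: false  # false: $persistence resolves to .Values.persistence above
    size: 16Gi        # ignored while ownConfig is false
```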
name: {{ include "common.names.fullname" $ }}-backups
namespace: {{ include "common.names.namespace" $ | quote }}
labels: {{- include "common.labels.standard" ( dict "customLabels" $.Values.commonLabels "context" $ ) | nindent 4 }}
I would keep the original contexts
Suggested change:
- name: {{ include "common.names.fullname" $ }}-backups
- namespace: {{ include "common.names.namespace" $ | quote }}
- labels: {{- include "common.labels.standard" ( dict "customLabels" $.Values.commonLabels "context" $ ) | nindent 4 }}
+ name: {{ include "common.names.fullname" . }}-backups
+ namespace: {{ include "common.names.namespace" . | quote }}
+ labels: {{- include "common.labels.standard" ( dict "customLabels" .Values.commonLabels "context" $ ) | nindent 4 }}
{{- if or $persistence.annotations $.Values.commonAnnotations }}
{{- $annotations := include "common.tplvalues.merge" ( dict "values" ( list $persistence.annotations $.Values.commonAnnotations ) "context" $ ) }}
Suggested change:
- {{- if or $persistence.annotations $.Values.commonAnnotations }}
- {{- $annotations := include "common.tplvalues.merge" ( dict "values" ( list $persistence.annotations $.Values.commonAnnotations ) "context" $ ) }}
+ {{- if or $persistence.annotations .Values.commonAnnotations }}
+ {{- $annotations := include "common.tplvalues.merge" ( dict "values" ( list $persistence.annotations .Values.commonAnnotations ) "context" . ) }}
persistence:
  ## @param backup.persistence.enabled Enable data persistence for backup volume
  ##
  enabled: false
Suggested change:
- enabled: false
+ enabled: true
## Persistence parameters
##
persistence:
Suggested change:
persistence:
  ## @param backup.persistence.ownConfig Prefer independent own persistence parameters to configure the backup volume
  ## When set to `false` (for backwards compatibility), the rest of the persistence parameters below will be ignored.
  ## This parameter will be set to `true` and removed in a future release.
  ##
  ownConfig: false
I'm a bit confused; if someone wants persistence for the database and not the backup, they would set…

EDIT: Scratch that, I think I get what you're going for here. This is mostly to avoid duplicating the default arguments twice if someone just wants two volumes that fit an identical set of values (or the default).

I know that with Helm charts you want sane defaults that leave the deployment in a stable, working, usable state, but that's sort of what I'm hung up on. I do agree that something like this would help prevent scenarios like upgrading an installation that uses a 1 Ti volume with a separate backup volume set to 8Gi (and similar issues), but it would also create issues for anyone using the current behaviour.

As far as I can tell, avoiding either one of these pitfalls requires providing unique configuration for the backup volume in the values.yaml used for the chart on deployment.

I think having the backups as a separate volume with separate, non-identical configuration is a more reasonable default, one that reflects the realistic expectation that multiple copies of the target database may end up needing more than the database volume's storage capacity. If someone wants to use the same volume for both on a new deployment, they can provide identical targets for both volumes.

Regardless, I think this makes a strong argument for a major version bump of the chart, and for a step in an upgrade guide to avoid whichever pitfall is created by the approach we settle on.

Side-note: I do think that if we were to keep distinct values for the…
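As a concrete example of the "unique configuration" mentioned above, a hypothetical values file giving the backup CronJob its own independent claim could look like this (parameter names mirror the top-level `persistence` section plus the proposed `ownConfig` flag; the size is illustrative):

```yaml
backup:
  enabled: true
  persistence:
    ownConfig: true  # use this section instead of the top-level persistence block
    enabled: true
    size: 16Gi       # sized independently from the database volume
```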
Force-pushed from 93d634f to 5b7b7b5
I feel like I still don't fully understand the concern here. What if you give me an example of the…

I would do the change incrementally. We can go ahead with the PR and… Does it make sense to you?
@andresbono Sorry, I've been really busy all of this week with work, but I haven't forgotten about this PR. My concerns about… I was mostly worried because I've previously seen behaviour in which certain drivers treat…

When I was testing this out myself (EKS v1.23 with aws-ebs-csi for storage), the cluster was throwing VolumeAttachment errors. I suspect it wasn't scheduling the CronJob on the same node as the Deployment, but that's an issue with something outside the scope of this chart.
Force-pushed from 89571cd to 29062a5
No problem at all! We totally understand.
This Pull Request has been automatically marked as "stale" because it has not had recent activity (for 15 days). It will be closed if no further activity occurs. Thank you for your contribution.
Add an optional configuration section under backup that allows the specification of distinct values for persistence. If this configuration section is not provided, the template will default to rendering using the existing top-level persistence configuration section to maintain compatibility. Signed-off-by: Kevin Delisle <kjdelisle@gmail.com>
Force-pushed from 88b2783 to b05670b
Signed-off-by: Bitnami Containers <bitnami-bot@vmware.com>
Hi @kevin-delisle-peakpower, if you prefer, you can grant me access to your fork and I can help push the proposed suggestions. Let me know if you prefer that. Thank you!
Honestly, I would love that. I know this has been taking too long and I've been swamped with a bunch of OIDC and AWS IAM refactoring work. I've granted you write access to the repo. Thank you 🙏
Due to the lack of activity in the last 5 days since it was marked as "stale", we proceed to close this Pull Request. Do not hesitate to reopen it later if necessary.
I'm back! I missed the approval of the PR last time (was bogged down in other work and it slipped my mind), so I've rebased it, and tidied it up a bit.
Description of the changes
Benefits
Users can now define volumes for the Deployment and their backups separately without needing Kustomize or manual post-deployment hacks.
Example usage could include:
Possible drawbacks
There should not be any drawbacks to this PR; I have attempted to maintain backwards compatibility with existing use cases in the event that someone provides a values.yaml file that does not include the new section.
Additional information
I've dumped the template out using permutations of both persistence modes to confirm that the `backup.persistence` section will take precedence over `persistence` for the backup volume, as well as to confirm that the templates still correctly render in the complete absence of that configuration section.

Checklist
- `Chart.yaml` version bumped according to semver. This is not necessary when the changes only affect README.md files.
- `README.md` updated using readme-generator-for-helm.