
Change in values.yaml v0.0.30 to v0.0.31 - removes ability to define end user params via ConfigMap #104

Open
peterbroadhurst opened this issue Apr 6, 2023 · 9 comments


peterbroadhurst commented Apr 6, 2023

Due to PR #94, using a configMapGenerator with a YAML file in a Kustomization now generates an invalid ConfigMap template.

Error: YAML parse error on kaleido-helm/templates/operator.yaml: error converting YAML to JSON: yaml: line 12: mapping values are not allowed in this context

I get YAML like the following, where you can see the line with indent 1, which does not work with a complex set of values:

apiVersion: v1
kind: ConfigMap
metadata:
  name: {{ include "myname.fullname" . }}-operator
  labels:
  {{- include "myname.labels" . | nindent 4 }}
data:
  config.yaml: {{ .Values.myname.configYaml | toYaml | indent 1 }}

You can see the same issue in the example provided in the PR:
https://github.com/arttor/helmify/blob/main/examples/operator/templates/manager-config.yaml

Here's an example of what helm template --debug shows me, with the YAML incorrectly inserted:

data:
  config.yaml:  images:
   myImage: myreg.jfrog.io/myreg:mytag
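
For comparison, here is a sketch (illustrative only, not what helmify currently emits) of a template that would keep the nested values in values.yaml while still rendering a single valid string value, using a block scalar plus toYaml and nindent:

data:
  config.yaml: |-
    {{- toYaml .Values.myname.configYaml | nindent 4 }}

With that form the user keeps structured values, and the rendered ConfigMap gets them serialized back into one string under config.yaml.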
@peterbroadhurst (Author)

Hmm, actually this is a more significant change than I thought.

We had been relying on a Kustomization like the one below, in order to create a nested set of values that could be supplied by a user installing the chart.

In 0.0.30 and below, this worked nicely: it generated a section in values.yaml with multiple nested values that could each be set (as long as we didn't use the friendlyname=config.yaml syntax, and accepted the slightly odd configYaml name in the values).

With #94, it seems that in order to allow multi-line individual sub-values, the behavior we were relying on has been removed.

configMapGenerator:
- files:
  - config.yaml
  name: myname
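
For context, the config.yaml referenced by that generator is a nested settings file along the lines of the snippet below (shortened to the fragment shown earlier in this issue):

images:
  myImage: myreg.jfrog.io/myreg:mytag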

@peterbroadhurst (Author)

It will take a bit of work to try and form a PR proposal that meets both requirements 🤔


arttor commented Apr 6, 2023

Hi, thank you for raising the issue. Just want to confirm that the old version was handling YAML data in the ConfigMap incorrectly, because Kubernetes expects string key-value pairs in data.
I get the correct template from the example in the latest version:

# Source: operator/templates/manager-config.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: release-name-operator-manager-config
  labels:
    helm.sh/chart: operator-0.1.0
    app.kubernetes.io/name: operator
    app.kubernetes.io/instance: release-name
    app.kubernetes.io/version: "0.1.0"
    app.kubernetes.io/managed-by: Helm
data:
  controller_manager_config.yaml:  |-
   apiVersion: controller-runtime.sigs.k8s.io/v1alpha1
   kind: ControllerManagerConfig
   health:
     healthProbeBindAddress: :8081
   metrics:
     bindAddress: 127.0.0.1:8080
   webhook:
     port: 9443
   leaderElection:
     leaderElect: true
     resourceName: 3a2e09e9.example.com
   rook:
     namespace: rook-ceph
     toolboxPodLabel: rook-ceph-tools
  dummyconfigmapkey: "dummyconfigmapvalue"

@peterbroadhurst (Author)

That is indeed the change in behavior we've seen.

Before, we were able to generate a sample values.yaml that contained a detailed set of individual config settings, which a user installing the operator could set using normal Helm semantics, just by using a configMapGenerator in a kustomization.yaml.

Now the whole section is instead converted into a single string field, and all of those fields have to be hand-assembled into a YAML payload. That is not a very friendly or professional user experience for someone consuming the operator.

The best illustration is in the samples in the code.

In v0.0.30 the user experience of installing an operator with a sophisticated YAML config was as follows:

controllerManagerConfigYaml:
  health:
    healthProbeBindAddress: :8081
  leaderElection:
    leaderElect: true
    resourceName: 3a2e09e9.example.com
  metrics:
    bindAddress: 127.0.0.1:8080
  rook:
    namespace: rook-ceph
    toolboxPodLabel: rook-ceph-tools
  webhook:
    port: 9443
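
With that layout a user can override an individual setting through standard Helm value merging, for example with a small override file like the one below (the value itself is only illustrative):

controllerManagerConfigYaml:
  metrics:
    bindAddress: 0.0.0.0:8080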

Now the same experience becomes:

managerConfig:
  controllerManagerConfigYaml: |-
    apiVersion: controller-runtime.sigs.k8s.io/v1alpha1
    kind: ControllerManagerConfig
    health:
      healthProbeBindAddress: :8081
    metrics:
      bindAddress: 127.0.0.1:8080
    webhook:
      port: 9443
    leaderElection:
      leaderElect: true
      resourceName: 3a2e09e9.example.com
    rook:
      namespace: rook-ceph
      toolboxPodLabel: rook-ceph-tools
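
Because Helm treats that block scalar as an opaque string, there is no deep merge: overriding even one field means restating the entire document (or passing the whole file with --set-file).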

We've had to pin the version back for now, as I don't immediately have the cycles to make a proposal that reinstates the very helpful behavior that was removed in v0.0.31 and later.

It's possible there's another way of doing what I was trying to achieve.

@peterbroadhurst (Author)

I note that one previous approach we'd found for generating such end-user config was to bind an environment variable with a valueFrom pointing at a ConfigMap resource. helmify generated a unique Helm value for each of those, but that meant we had to define all of the config for the operator runtime as environment variables, which is much less desirable from a code-complexity perspective than having a single YAML config mounted directly from the ConfigMap.

peterbroadhurst changed the title from "Regression in 0.3.31 - invalid YAML generated" to "Change in values.yaml v0.0.30 to v0.0.31 - removes ability to define end user params via ConfigMap" on Apr 7, 2023

arttor commented Apr 7, 2023

I absolutely agree that separate values for the YAML were more user-friendly, but the resulting ConfigMap was incorrect, because Kubernetes expects a string value according to the spec.


arttor commented Apr 7, 2023

Please describe how it should look in a Helm chart, keeping in mind that it should be represented as a string in the resulting Kubernetes manifest.

@onelapahead (Contributor)

Hello @arttor. I believe this regression was unnecessary, because helmify was never generating an invalid ConfigMap as you suggested:

I absolutely agree that separate values for yaml were more user-friendly but resulted ConfigMap was incorrect because k8s expects string value according to spec.

I looked into the original issue (#30) which motivated the fixes in #94. I've created a branch on my fork based on v0.3.30 where I copied over the sample app output, re-generated the example app chart, and then copied over the E2E tests that were added to the CI workflow in later releases, which validate that the resulting helm template creates correct K8s resources: https://github.com/arttor/helmify/compare/v0.3.30...hfuss:helmify:0.3.30-backport?expand=1

Running helm template manually, I get:

# Source: app/templates/my-config-props.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: release-name-app-my-config-props
  labels:
    helm.sh/chart: app-0.1.0
    app.kubernetes.io/name: app
    app.kubernetes.io/instance: release-name
    app.kubernetes.io/version: "0.1.0"
    app.kubernetes.io/managed-by: Helm
data:
  my.prop1: "1"
  my.prop2: "val 1"
  my.prop3: "true"
  my.services: 
    - docker
    - helm
    - k8s
  myval.yaml: |
    apiVersion: clickhouse.altinity.com/v1
    kind: ClickHouseInstallationTemplate
    metadata:
      name: "default-oneperhost-pod-template"
    spec:
      templates:
        podTemplates: 
          - distribution: OnePerHost
            name: default-oneperhost-pod-template

And the GitHub Action run confirms it was always valid: https://github.com/hfuss/helmify/actions/runs/4638714352/jobs/8208790960

So now we're in a tough situation. Folks using v0.3.31+ may have come to rely on the new resulting string values, just like we relied on the resulting YAML values in v0.3.30.

We could either:

  1. Regress the regression and make another breaking change.
  2. Offer a CLI flag to indicate what behavior to use for multiline configs, and have the code support either.


arttor commented Apr 7, 2023

Understood. Can you please elaborate on the first option? What change do we need to introduce, or are you speaking about your own project?

About the second option: having an extra flag for backward compatibility is OK, but it should not work exactly like before, because the ConfigMap was invalid.
We could make an exception only for kubebuilder: check whether the property name is controller_manager_config.yaml, or parse the underlying YAML and check for kind: ControllerManagerConfig. But operator developers can break that by introducing other kinds of configs or using different property names.
Alternatively, we can just check whether a property is in YAML format and move everything to values.yaml in YAML format. But in that case we will have apiVersion, kind, etc. in values.yaml.
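
To make that last alternative concrete, here is a rough sketch (names and values are illustrative, not generated output). values.yaml would carry the parsed document, including its apiVersion and kind:

managerConfig:
  controllerManagerConfigYaml:
    apiVersion: controller-runtime.sigs.k8s.io/v1alpha1
    kind: ControllerManagerConfig
    health:
      healthProbeBindAddress: :8081
    metrics:
      bindAddress: 127.0.0.1:8080

and the ConfigMap template would serialize it back into a single string value:

data:
  controller_manager_config.yaml: |-
    {{- toYaml .Values.managerConfig.controllerManagerConfigYaml | nindent 4 }}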
