#18 more configurable server config file #19

Open · wants to merge 2 commits into master
Conversation

TomHellier (Contributor)
The server validates the configuration file, and if it sees values it does not accept, the process exits.

This change allows users to force-overwrite the values that exist in the Helm chart, stopping the Helm map merging.

Should address #18
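The PR diff isn't shown in this thread, but a force-overwrite switch in a chart template might look roughly like the sketch below; the `.Values.configForceOverride` and `.Values.config` names are hypothetical illustrations, not the PR's actual value keys.

```yaml
# templates/configmap.yaml (sketch) -- value names are hypothetical
apiVersion: v1
kind: ConfigMap
metadata:
  name: parca-config
data:
  parca.yaml: |
    {{- if .Values.configForceOverride }}
    # use the user's config verbatim, bypassing Helm's merge with chart defaults
    {{- toYaml .Values.configForceOverride | nindent 4 }}
    {{- else }}
    {{- toYaml .Values.config | nindent 4 }}
    {{- end }}
```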

@TomHellier (Contributor, Author)

@rlex would you mind taking a look?

@rlex (Collaborator)

rlex commented Sep 16, 2022

I actually thought about this just a couple of days ago.
Since the config is YAML anyway, we can use something like the Grafana charts do: https://github.com/grafana/helm-charts/pull/1591/files.
We can construct config.yaml from other values and then merge a structuredConfig block on top, allowing us to override anything in the config.
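A minimal sketch of that pattern, modeled on the Grafana charts PR linked above rather than on this chart's actual layout; the `.Values.config` and `.Values.structuredConfig` key names are assumptions. `mergeOverwrite` and `deepCopy` are standard Sprig functions available in Helm templates.

```yaml
# templates/configmap.yaml (sketch) -- structuredConfig keys win over the
# base config because mergeOverwrite lets the second argument overwrite the
# first; deepCopy is needed because mergeOverwrite mutates its destination
apiVersion: v1
kind: ConfigMap
metadata:
  name: parca-config
data:
  parca.yaml: |
    {{- mergeOverwrite (deepCopy .Values.config) .Values.structuredConfig | toYaml | nindent 4 }}
```

Plain `merge` would not work here: with `merge`, the destination's existing values win on conflict, so the override would never take effect.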

@TomHellier (Contributor, Author)

TomHellier commented Sep 16, 2022

I don't think the structured configuration would solve the issue I am facing.

I basically want to void out some sections of parca.yaml; the server is incredibly strict about what may appear in that file.

Parca from master expects:

object_storage:
  bucket:
    type: "FILESYSTEM"
    config:
      directory: "/tmp/data"
scrape_configs: []

and if we allow the values from the base values.yaml to merge in without the ability to override them, it looks like this:

object_storage:
  bucket:
    type: "FILESYSTEM"
    config:
      directory: "/tmp/data"
scrape_configs:
    - job_name: 'kubernetes-pods'
      scrape_interval: 1m
      scrape_timeout: 10s
      kubernetes_sd_configs:
        - role: pod
      relabel_configs:
        - source_labels: [__meta_kubernetes_pod_annotation_parca_dev_scrape]
          action: keep
          regex: true
        - source_labels: [__meta_kubernetes_pod_annotation_parca_dev_path]
          action: replace
          target_label: __metrics_path__
          regex: (.+)
        - source_labels: [__address__, __meta_kubernetes_pod_annotation_parca_dev_port]
          action: replace
          regex: ([^:]+)(?::\d+)?;(\d+)
          replacement: $1:$2
          target_label: __address__
        - action: labelmap
          regex: __meta_kubernetes_pod_label_(.+)
        - source_labels: [__meta_kubernetes_namespace]
          action: replace
          target_label: kubernetes_namespace
        - source_labels: [__meta_kubernetes_pod_name]
          action: replace
          target_label: kubernetes_pod_name
debug_info:
  bucket:
    config:
      directory: "./tmp"
    type: FILESYSTEM
  cache:
    config:
      directory: "./tmp"
    type: FILESYSTEM

and the Parca process errors because it isn't expecting to see debug_info and cache in that YAML file.

This issue describes the Helm problem I am seeing:
helm/helm#5184
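For completeness: Helm documents that overriding a default key with null deletes it from the merged values ("Deleting a Default Key" in the Helm docs). So one possible workaround, assuming the debug_info block comes from the chart's base values.yaml, is a user-supplied values file like the sketch below; whether null deletion behaves cleanly for this chart's nested maps is untested here.

```yaml
# user-supplied values.yaml (sketch) -- a null override tells Helm to drop the
# chart's default debug_info key, so it never reaches the rendered parca.yaml
debug_info: null
```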
