
Kubernetes service discovery not working without Kubernetes decoration #673

Open
aabmass opened this issue Mar 6, 2024 · 3 comments
Labels: enhancement (New feature or request)

aabmass commented Mar 6, 2024

I am running Beyla as a DaemonSet in GKE and want to instrument only processes that are part of a pod:

discovery:
  services:
  # only gather metrics from workloads running as a pod
  - k8s_pod_name: .+
  skip_go_specific_tracers: true
otel_traces_export:
  endpoint: http://otel-collector:4317
  interval: 30s

I'm finding this doesn't work unless I also enable the Kubernetes decorator:

discovery:
  services:
  # only gather metrics from workloads running as a pod
  - k8s_pod_name: .+
  skip_go_specific_tracers: true
otel_traces_export:
  endpoint: http://otel-collector:4317
  interval: 30s
attributes:
  kubernetes:
    enable: true

In my particular case, I'd like to use the OTel collector's k8sattributesprocessor to add this metadata instead (happy to expand more on why).
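
For illustration, a minimal sketch of that collector-side decoration; the extracted metadata keys and the pod-association rule are assumptions, not settings taken from this issue:

processors:
  k8sattributes:
    # look up pod metadata via the Kubernetes API and attach it to incoming
    # telemetry, associating data to pods by the source connection IP
    pod_association:
      - sources:
          - from: connection
    extract:
      metadata:
        - k8s.pod.name
        - k8s.namespace.name
        - k8s.node.name

service:
  pipelines:
    traces:
      receivers: [otlp]
      processors: [k8sattributes]
      exporters: [otlp]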

grcevski (Contributor) commented Mar 7, 2024

Thanks @aabmass, yes, this is a current limitation; we'll work on removing it so that you can enable the Kubernetes discovery but not the annotation. We'd really appreciate it if you could let us know why you'd use the OTel collector for annotation instead. Even if it's something we can't fix, we'd like the feedback.

I wonder for now if there's a way to drop our k8s annotations at the collector side and then inject theirs. I think this might work:
https://github.com/open-telemetry/opentelemetry-collector-contrib/blob/main/processor/filterprocessor/README.md
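
As a rough sketch of that drop-and-reinject idea on the collector side, here is one possible shape, using the resource processor for the attribute deletion (the filter processor, as I understand it, drops whole spans or metrics rather than individual attributes); the regex, metadata keys, and pipeline wiring below are assumptions, not settings from this thread:

processors:
  # drop the k8s.* resource attributes that Beyla already attached
  resource:
    attributes:
      - pattern: ^k8s\..*
        action: delete
  # re-decorate with the collector's own Kubernetes metadata lookup
  k8sattributes:
    extract:
      metadata:
        - k8s.pod.name
        - k8s.namespace.name

service:
  pipelines:
    traces:
      receivers: [otlp]
      processors: [resource, k8sattributes]
      exporters: [otlp]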

aabmass (Author) commented Mar 7, 2024

Thanks! We are creating a servicegraph metric in the collector from Beyla spans and I want to resolve the k8s pod name for both server.address and client.address.
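
For context, a minimal sketch of the kind of collector pipeline being described, using the contrib servicegraph connector; the dimension names, receivers, and exporters are illustrative assumptions, not taken from this thread:

connectors:
  servicegraph:
    # build service graph metrics from the Beyla spans, carrying along the
    # peer addresses that should be resolved to pod names
    dimensions:
      - server.address
      - client.address

service:
  pipelines:
    traces:
      receivers: [otlp]
      exporters: [servicegraph]
    metrics:
      receivers: [servicegraph]
      exporters: [otlp]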

One other concern @dashpole brought up is that since we're running Beyla as a DaemonSet, doing k8s annotation in Beyla would create a watch on every node, which could be expensive for the k8s API server and generate a lot of notifications for irrelevant pods. This is more precautionary; we aren't seeing any issues right now.

I wonder for now if there's a way to drop our k8s annotations at the collector side and then inject theirs.

Yes, there are a few workarounds; I just wanted to raise the issue in case the docs need to be updated.

dashpole (Contributor) commented Mar 7, 2024

doing k8s annotation in Beyla would create a watch on every node, which could be expensive for the k8s API server and generate a lot of notifications for irrelevant pods. This is more precautionary; we aren't seeing any issues right now.

For "regular" http metrics, this isn't an issue, since in theory you can filter pods being watched for pods that are on the same node as beyla. But for service graph metrics, the client or server address can be for pods on a different node, which means you can't filter the watch used by beyla anymore.
