I am running Beyla as a Daemonset in GKE and want to only instrument processes that are part of a pod:
```yaml
discovery:
  services:
    # only gather metrics from workloads running as a pod
    - k8s_pod_name: .+
  skip_go_specific_tracers: true
otel_traces_export:
  endpoint: http://otel-collector:4317
  interval: 30s
```
I'm finding this doesn't work unless I also enable the Kubernetes decorator:

```yaml
discovery:
  services:
    # only gather metrics from workloads running as a pod
    - k8s_pod_name: .+
  skip_go_specific_tracers: true
otel_traces_export:
  endpoint: http://otel-collector:4317
  interval: 30s
attributes:
  kubernetes:
    enable: true
```
In my particular case, I'd like to use the OTel collector's `k8sattributesprocessor` to add this metadata instead (happy to expand more on why).
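For reference, the collector-side alternative would look roughly like this. This is a minimal sketch; the receiver/exporter names and the extracted metadata keys are illustrative assumptions, not taken from this issue:

```yaml
# OTel Collector sketch: let k8sattributesprocessor attach pod metadata
# to spans arriving from Beyla, instead of enabling Beyla's own
# Kubernetes decoration.
processors:
  k8sattributes:
    extract:
      metadata:
        - k8s.pod.name
        - k8s.namespace.name
        - k8s.node.name
service:
  pipelines:
    traces:
      receivers: [otlp]           # Beyla exports OTLP traces here
      processors: [k8sattributes]
      exporters: [otlp]           # illustrative downstream exporter
```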
Thanks @aabmass, yes, this is a current limitation; we'll work on removing it so that you can enable Kubernetes discovery without the annotation. We'd really appreciate it if you could let us know why you'd use the OTel collector for annotation instead. Even if it's something we can't fix, we'd like the feedback.
Thanks! We are creating a servicegraph metric in the collector from Beyla spans, and I want to resolve the k8s pod name for both `server.address` and `client.address`.
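A rough sketch of what that pipeline could look like with the contrib servicegraph connector (the dimension list and exporter names are illustrative assumptions, not from this thread):

```yaml
# Sketch: build service graph metrics from Beyla spans in the collector.
connectors:
  servicegraph:
    dimensions:
      - server.address   # resolved to pod names by k8sattributes upstream
      - client.address
service:
  pipelines:
    traces:
      receivers: [otlp]
      exporters: [servicegraph]   # connector consumes the trace stream
    metrics:
      receivers: [servicegraph]   # and emits service graph metrics
      exporters: [prometheus]
```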
One other concern @dashpole brought up is that since we're running Beyla as a DaemonSet, doing k8s annotation in Beyla would create a watch on every node, which could be expensive for the k8s API server and generate a lot of notifications for irrelevant pods. This is more precautionary; we aren't seeing any issues right now.
I wonder for now if there's a way to drop our k8s annotations at the collector side and then inject theirs.
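One possible shape for that workaround, using the collector's attributes processor to delete Beyla's Kubernetes attributes before `k8sattributes` injects its own. The exact attribute keys Beyla sets are an assumption here:

```yaml
processors:
  # drop the k8s metadata Beyla attached to each span...
  attributes/drop_beyla_k8s:
    actions:
      - key: k8s.pod.name
        action: delete
      - key: k8s.namespace.name
        action: delete
  # ...then let the collector re-attach its own
  k8sattributes: {}
service:
  pipelines:
    traces:
      receivers: [otlp]
      processors: [attributes/drop_beyla_k8s, k8sattributes]
      exporters: [otlp]
```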
Yes there a few workarounds, I just wanted to raise the issue if the docs need to be updated.
> doing k8s annotation in Beyla would create a watch on every node which could be expensive for the k8s API server and generate a lot of notifications for irrelevant pods. This is more precautionary, we aren't seeing any issues right now.
For "regular" HTTP metrics this isn't an issue, since in theory you can filter the pods being watched down to pods on the same node as Beyla. But for service graph metrics, the client or server address can belong to a pod on a different node, which means you can't filter the watch used by Beyla anymore.
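For comparison, the collector's `k8sattributesprocessor` exposes exactly this kind of node-scoped watch filter (a sketch; wiring `KUBE_NODE_NAME` through the Downward API is assumed):

```yaml
processors:
  k8sattributes:
    filter:
      # only watch pods scheduled on the same node as this collector
      # instance, keeping the k8s API watch cheap in DaemonSet deployments
      node_from_env_var: KUBE_NODE_NAME
```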