TracingPolicies do not get applied in WSL2 #2338

Open · joshuajorel opened this issue Apr 16, 2024 · 3 comments

Labels
kind/bug Something isn't working

What happened?

I am building the main branch in my local WSL2 environment, and TracingPolicy and TracingPolicyNamespaced resources do not get applied. I am running the following example:

apiVersion: cilium.io/v1alpha1
kind: TracingPolicyNamespaced
metadata:
  name: "fd-install"
spec:
  kprobes:
  - call: "fd_install"
    syscall: false
    args:
    - index: 0
      type: "int"
    - index: 1
      type: "file"
    selectors:
    - matchArgs:
      - index: 1
        operator: "Equal"
        values:
        - "/tmp/tetragon"
      matchActions:
      - action: Sigkill
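
For reference, the policy above was applied with kubectl. A rough way to double-check whether the agent actually loaded it (a sketch; it assumes the tetra CLI is available in the Tetragon container, and both the file name and pod name are placeholders):

kubectl apply -f fd-install.yaml   # fd-install.yaml holds the policy above
kubectl get tracingpoliciesnamespaced
# Ask the agent which policies it has loaded
kubectl exec -n kube-system tetragon-xxxxx -c tetragon -- tetra tracingpolicy list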

Tetragon Version

Tetragon built from commit 1dee96d7d58b7ccc57e955eb71b4c1e72f87293d

Kernel Version

Linux DESKTOP-KI004JQ 5.15.146.1-microsoft-standard-WSL2 #1 SMP Thu Jan 11 04:09:03 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux

Kubernetes Version

Client Version: v1.29.1
Kustomize Version: v5.0.4-0.20230601165947-6ce0bf390ce3
Server Version: v1.29.2

Bugtool

tetragon-bugtool.tar.gz

Relevant log output

No response

Anything else?

Only process events get captured; no other events appear.
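
One way to see this live from the agent (a sketch; the pod name is a placeholder):

# Stream events in compact form; if the policy were active, opening
# /tmp/tetragon in a workload pod should produce kprobe events and a
# SIGKILL, not just process exec/exit
kubectl exec -n kube-system tetragon-xxxxx -c tetragon -- tetra getevents -o compact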

kkourt (Contributor) commented Apr 16, 2024

Thanks! Can you please provide a sysdump or the tetragon pod logs?

For the sysdump, please see https://tetragon.io/docs/troubleshooting/#automatic-log--state-collection.

joshuajorel (Contributor, Author) commented

@kkourt attached the sysdump: cilium-sysdump-20240417-161018.zip

kkourt (Contributor) commented Apr 17, 2024

It's not clear to me exactly what the issue is, so I'll add some speculation and notes for future reference.

We don't seem to have a proper /procRoot:

2024-04-17T06:46:13.367309227Z time="2024-04-17T06:46:13Z" level=warning msg="Tetragon pid file creation failed" error="readlink /procRoot/self: no such file or directory" pid=0
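
/procRoot is presumably where the host's /proc gets mounted into the agent container, so this warning hints that the mount is missing or empty under WSL2. A quick sanity check from inside the pod (a sketch; the pod name is a placeholder):

# /procRoot/self should resolve to a PID directory, e.g. "4242"
kubectl exec -n kube-system tetragon-xxxxx -c tetragon -- readlink /procRoot/self
# and /procRoot should contain the host's PID directories
kubectl exec -n kube-system tetragon-xxxxx -c tetragon -- ls /procRoot | head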

But at least in terms of metrics, everything seems fine:

tetragon_policyfilter_metrics_total{error="",op="add",subsys="pod-handlers"} 15
tetragon_policyfilter_metrics_total{error="",op="add",subsys="rthooks"} 0
tetragon_policyfilter_metrics_total{error="",op="add-container",subsys="pod-handlers"} 0
tetragon_policyfilter_metrics_total{error="",op="add-container",subsys="rthooks"} 0
tetragon_policyfilter_metrics_total{error="",op="delete",subsys="pod-handlers"} 0
tetragon_policyfilter_metrics_total{error="",op="delete",subsys="rthooks"} 0
tetragon_policyfilter_metrics_total{error="",op="update",subsys="pod-handlers"} 171
tetragon_policyfilter_metrics_total{error="",op="update",subsys="rthooks"} 0
tetragon_policyfilter_metrics_total{error="generic-error",op="add",subsys="pod-handlers"} 0
tetragon_policyfilter_metrics_total{error="generic-error",op="add",subsys="rthooks"} 0
tetragon_policyfilter_metrics_total{error="generic-error",op="add-container",subsys="pod-handlers"} 0
tetragon_policyfilter_metrics_total{error="generic-error",op="add-container",subsys="rthooks"} 0
tetragon_policyfilter_metrics_total{error="generic-error",op="delete",subsys="pod-handlers"} 0
tetragon_policyfilter_metrics_total{error="generic-error",op="delete",subsys="rthooks"} 0
tetragon_policyfilter_metrics_total{error="generic-error",op="update",subsys="pod-handlers"} 0
tetragon_policyfilter_metrics_total{error="generic-error",op="update",subsys="rthooks"} 0
tetragon_policyfilter_metrics_total{error="pod-namespace-conflict",op="add",subsys="pod-handlers"} 0
tetragon_policyfilter_metrics_total{error="pod-namespace-conflict",op="add",subsys="rthooks"} 0
tetragon_policyfilter_metrics_total{error="pod-namespace-conflict",op="add-container",subsys="pod-handlers"} 0
tetragon_policyfilter_metrics_total{error="pod-namespace-conflict",op="add-container",subsys="rthooks"} 0
tetragon_policyfilter_metrics_total{error="pod-namespace-conflict",op="delete",subsys="pod-handlers"} 0
tetragon_policyfilter_metrics_total{error="pod-namespace-conflict",op="delete",subsys="rthooks"} 0
tetragon_policyfilter_metrics_total{error="pod-namespace-conflict",op="update",subsys="pod-handlers"} 0
tetragon_policyfilter_metrics_total{error="pod-namespace-conflict",op="update",subsys="rthooks"} 0

And we do have some entries in the policyfilter map:

$ cat policy_filter_maps.json 
{"1":{"8253":{},"8283":{},"8313":{},"8343":{}}}                                             

But maybe they are namespaced and do not correspond to the real cgroups in the kernel.
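
One way to test that speculation would be to compare the map entries against the kernel-side cgroup IDs (a cgroup ID is the inode number of the cgroup directory). A sketch, assuming the map is pinned under /sys/fs/bpf/tetragon and cgroup v2 is in use; the paths below are placeholders:

# Dump the pinned policyfilter map (policy id -> set of cgroup ids;
# the pin path is an assumption)
bpftool map dump pinned /sys/fs/bpf/tetragon/policy_filter_maps
# The cgroup ID of a container is the inode of its cgroup directory;
# it should match one of the ids above for pods in the policy's namespace
stat -c '%i' /sys/fs/cgroup/kubepods.slice/<pod-cgroup>/<container-cgroup>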

We also seem to have some exit events, but definitely fewer than exec:

tetragon_msg_op_total{msg_op="23"} 92 /* CLONE */
tetragon_msg_op_total{msg_op="24"} 5  /* DATA */
tetragon_msg_op_total{msg_op="5"} 4460 /* EXEC */
tetragon_msg_op_total{msg_op="7"} 72 /* EXIT */
