Resource consumption is not limited and the pod is not OOM-killed #10371
Comments
From the logs you shared, it looks like you/containerd are specifying a systemd cgroup path (format …)
Yes, it's working, thank you.
@manninglucas Can we autodetect whether or not systemd-based cgroup control should be enabled?
@EtiennePerot Maybe, but I think we should always try to stay in line with what runc does. runc doesn't attempt to auto-detect systemd-based configuration; it just reads whatever the user sets. [1] https://github.com/opencontainers/runc/blob/e8bec1ba40039a004d57ddc0a9afec9a8364172b/docs/systemd.md
Fair enough, but perhaps also emit a warning message in the runsc logs if this is detected?
Description
I want to sandbox a pod with gVisor and limit its resource consumption (CPU and memory).
I am using containerd as the container runtime.
I notice that the pod consumes more memory and CPU than its limits allow. I tried many configurations, but it seems that gVisor does not enforce the limits.
Steps to reproduce
Configuration
Runsc
File
/etc/containerd/runsc.toml
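The file contents were attached as a collapsed block in the original issue. Given the systemd-cgroup discussion in the comments above, a minimal runsc.toml enabling the systemd cgroup driver might look like this (an illustrative sketch, not the reporter's actual config; `systemd-cgroup` mirrors runc's flag of the same name):

```toml
# /etc/containerd/runsc.toml (illustrative)
[runsc_config]
  # Pass --systemd-cgroup to runsc so it manages cgroups through systemd,
  # matching kubelet/containerd when they use the systemd cgroup driver.
  systemd-cgroup = "true"
```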
Containerd
File
/etc/containerd/config.toml
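The containerd config was likewise attached as a collapsed block. A typical excerpt registering runsc as a CRI runtime and pointing the shim at the runsc options file might look like this (illustrative; paths follow the gVisor containerd setup docs):

```toml
# /etc/containerd/config.toml (illustrative excerpt)
version = 2

[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runsc]
  runtime_type = "io.containerd.runsc.v1"
  [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runsc.options]
    TypeUrl = "io.containerd.runsc.v1.options"
    # Tell containerd-shim-runsc-v1 where to find the runsc flags.
    ConfigPath = "/etc/containerd/runsc.toml"
```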
Execute
Kubernetes resources
I am using stress-ng to request 2048Mi of memory and 9 vCPUs, while the container's resource limits are set to 1024Mi and 1 vCPU.
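A pod spec matching that description might look like the sketch below (the pod name and image are assumptions; only the stress-ng arguments and the limits come from the report):

```yaml
# Illustrative pod spec; name and image are assumptions.
apiVersion: v1
kind: Pod
metadata:
  name: stress-gvisor
spec:
  runtimeClassName: gvisor
  containers:
  - name: stress
    image: alexeiled/stress-ng:latest
    # Ask stress-ng for more than the limits allow: 2048Mi and 9 CPU workers.
    args: ["--vm", "1", "--vm-bytes", "2048M", "--cpu", "9", "--timeout", "600s"]
    resources:
      limits:
        memory: "1024Mi"
        cpu: "1"
```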
Get pod and containers ID/UID
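The exact commands were collapsed in the original issue; a sketch of this step with crictl could be (pod name is an assumption carried over from the example spec):

```shell
# Look up the pod sandbox ID by name, then the container ID inside it.
POD_ID=$(crictl pods --name stress-gvisor -q)
CONTAINER_ID=$(crictl ps --pod "$POD_ID" -q)
echo "pod=$POD_ID container=$CONTAINER_ID"
```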
Inspect Pod and Container
crictl stats $CONTAINER_ID
List Logs
Get cgroup information
Check memory
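One way to sketch this check on a cgroup v2 node (paths and the `$CONTAINER_ID` variable are assumptions; adjust to your host):

```shell
# Resolve the container's cgroup from its init PID, then read the
# effective memory limit (memory.max on cgroup v2).
PID=$(crictl inspect "$CONTAINER_ID" | grep -m1 '"pid"' | grep -o '[0-9]*')
CG=$(grep -m1 '^0::' "/proc/$PID/cgroup" | cut -d: -f3)
cat "/sys/fs/cgroup${CG}/memory.max"
```

If the limit is being applied, this should print 1073741824 (1024Mi in bytes) rather than `max`.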
Results
All logs are available here: https://github.com/cedricmoulard/gvisor-ressources-issue
Pod on cluster
I expect the pod to be OOM-killed, or to use less than 1Gi of memory and 1 vCPU.
Cgroups
cat $CGROUP_EXPORT_FILE
runsc version
docker version (if using docker)
No response
uname
Linux k8s-test-gvisor-kosmos-node01 5.15.0-102-generic #112-Ubuntu SMP Tue Mar 5 16:50:32 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux
kubectl (if using Kubernetes)
No response
repo state (if built from source)
No response
runsc debug logs (if available)