Resource consumption by Python is not limited #10264
Comments
Hi; I can't seem to reproduce this, at least on GKE. gVisor doesn't do memory limiting by itself; instead, it relies on the host Linux kernel to do this. The limit is set up as part of container startup, which eventually ends up configuring the host cgroup to control memory. This way, a single limit covers the total memory usage of the gVisor kernel plus the processes running inside it. If that total goes over the limit, the sandbox should be killed by the Linux OOM killer, and this should be visible in the host kernel log. The enforcement mechanism depends on many moving parts, so I suggest checking all of them.
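For reference, checking the limit itself can look like the following. This is a minimal diagnostic sketch of my own (not a gVisor tool), assuming you run it on the host node and point it at the sandbox's cgroup directory; the file names are the standard cgroup v1/v2 kernel interfaces.

```python
# check_memory_limit.py -- diagnostic sketch (assumed helper, not part of gVisor).
# Usage: python3 check_memory_limit.py /sys/fs/cgroup/<path-to-sandbox-cgroup>
import sys
from pathlib import Path

cgroup_dir = Path(sys.argv[1])

for name in ("memory.max", "memory.limit_in_bytes"):  # cgroup v2, then v1
    f = cgroup_dir / name
    if f.exists():
        raw = f.read_text().strip()
        # "max" (v2) or a huge sentinel value (v1) means no limit is enforced.
        if raw == "max" or int(raw) >= 2**62:
            print(f"{f}: NO effective memory limit ({raw})")
        else:
            print(f"{f}: limit = {int(raw) / 2**30:.2f} GiB")
        break
else:
    print(f"no memory limit file found under {cgroup_dir}")
```

If this prints no effective limit, the kernel has nothing to enforce, regardless of what the pod spec says.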
If all of this is in place, please provide runsc debug logs and details on how you installed gVisor within the Kubernetes cluster. Also, please check #10371, which was filed after this issue and looks quite similar.
@EtiennePerot Thanks for replying! We have found the problem, thanks to @charlie0129. It turns out we didn't configure gVisor to use systemd-cgroup, which is the cgroup manager in our cluster. After adding that configuration, the limits are enforced as expected. However, I can't find any related documentation or FAQ about the cgroup manager. Forgive me if I missed it; if there truly isn't any, it would be kind to mention it somewhere in the documentation.
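For anyone hitting the same thing: one quick way to spot a cgroup-driver mismatch is the style of the cgroup paths the kernel reports. This is an illustrative sketch of my own, based on kubelet's naming conventions (the systemd driver uses `.slice`/`.scope` path segments, while the cgroupfs driver uses plain paths such as `/kubepods/burstable/pod<uid>/...`), not anything from gVisor's docs:

```python
# cgroup_driver_hint.py -- illustrative sketch (my own helper, not from gVisor docs).
# Run on the host node; pass a container process's PID (defaults to self).
import sys

pid = sys.argv[1] if len(sys.argv) > 1 else "self"
with open(f"/proc/{pid}/cgroup") as f:
    # Each line is "hierarchy-id:controllers:path"; keep only the path.
    paths = [line.rstrip("\n").split(":", 2)[2] for line in f]

for p in paths:
    style = "systemd" if (".slice" in p or ".scope" in p) else "cgroupfs"
    print(f"{style:8} {p}")
```

If kubelet and containerd report systemd-style paths but the runtime is left on a cgroupfs default, the runtime may create or look up its cgroup under a path that doesn't match the one the limit was set on, which would be consistent with the unlimited memory use seen here.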
See the discussion on #10371 about this.
Description
I'm building a sandbox service with gVisor, but Python seems to be able to allocate unlimited memory, while a bash script that tries to allocate unlimited memory is marked Error in the Pod status.
Steps to reproduce
I got the result above: memory usage in my pod reaches ~62GiB. I'm trying to investigate why it makes our machine go OOM, so my pod allocates ~100GiB of memory.
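The exact reproduction commands did not survive in this report; a minimal allocator along these lines (my sketch, not the reporter's actual script) exercises the same path when run inside the sandboxed pod:

```python
# alloc.py -- minimal repro sketch (mine, not the original script).
# Allocates memory in 1 GiB chunks; with a working cgroup limit the process
# should be OOM-killed long before the loop finishes.
chunks = []
for i in range(100):
    # bytearray zero-fills the buffer, so the pages are actually touched
    # and committed, not just reserved.
    chunks.append(bytearray(1 << 30))
    print(f"allocated {i + 1} GiB", flush=True)
```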
runsc version
docker version (if using docker)
No response
uname
Linux 3090-k8s-node029 5.15.0-69-generic #76-Ubuntu SMP Fri Mar 17 17:19:29 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux
kubectl (if using Kubernetes)
repo state (if built from source)
No response
runsc debug logs (if available)
Haven't collected them in the cluster yet.