OOMKilled as number of constraints grew #3357
Comments
There are a number of possible causes for increased RAM usage:

- What are your constraint and constraint template counts? (You mention 87 constraints, but it's not clear if/how that relates to the number of constraint templates.)
- Has Gatekeeper been configured to cap the number of concurrent inbound requests? (If not, each concurrent request may require more RAM.)
- Are you using referential data? If so, has the data set grown? (Referential data is cached in RAM.)

To get a rough sense of whether RAM usage is due to at-rest conditions vs. satisfying serving requirements, I suggest temporarily disabling the validating/mutating webhooks to see what RAM usage settles on. If at-rest usage is still high, the likely culprit is referential data. If request volume is high, then I'd suspect either uncapped parallelism or poorly optimized Rego in a constraint template.

Another option is to compare webhook memory usage to audit pod memory usage. If audit looks healthier, it's likely due to QPS.

The error you are highlighting is consistent with the golang context being closed due to an OOMKill (i.e. the OOMKill causes the error, not the other way around).
The constraint template count is the same as the constraint count. We are using referential data, and removing pods from it improved things for both audit and the webhook; both were consuming high memory. I also noticed that it ended up needing more resources in a smaller multi-tenant cluster than in another, larger cluster, so I'd still like to cap the number of requests. How can I do that? Also, despite it no longer being OOMKilled, I am continuing to see the error I mentioned above, even though the webhook continues to work as expected end to end.
"context canceled" can mean the caller's context was canceled (usually due to request timeout). I think it usually says something different than "serving context canceled", though maybe the framework changed its error text. In any case, gatekeeper/pkg/webhook/policy.go Line 71 in 80d677a
Tuning that value and/or raising CPU or the number of serving webhook pods may help with timeouts (assuming that's what the context canceled errors are).
What steps did you take and what happened:
Our controller manager replicas have been using a lot of memory as our number of constraints has grown. We want to know whether these resource limits are expected.
What did you expect to happen:
Lower resource limits.
Anything else you would like to add:
Seeing these error logs:
Environment:
Kubernetes version (use `kubectl version`): 1.27.3 (AKS)