I am running version-checker on a single-node, quite small cluster with ~60 pods. So far it is working nicely, but I do not understand its memory behavior.
I'm basically running the sample deployment file, plus the `--test-all-containers` flag and some CPU limits (`kubectl get pod -o yaml`):
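For reference, roughly what the container spec looks like; the image tag and resource values below are illustrative placeholders, not my exact configuration:

```yaml
# Sketch of the container spec, based on the sample deployment
# (image tag and resource values are assumptions for illustration):
containers:
  - name: version-checker
    image: quay.io/jetstack/version-checker:v0.2.1
    args:
      - --test-all-containers
    resources:
      limits:
        cpu: 100m
        memory: 64Mi
```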
Over time, I see that version-checker approaches the memory limit and then stays near ~99% for a while. After some time, the kernel kills the container due to OOM and Kubernetes restarts the pod.
However, I do not see anything alarming in the logs, other than some failures and expected permission errors.
This doesn't seem to have any functional impact, but does fire some alerts and doesn't look good on my dashboards :)
Is this behavior intended, and/or is there any way to prevent it?