
Is workingSetBytes of 0 really an indication of a terminated process? #1330

Open
SergeyKanzhelev opened this issue Sep 15, 2023 · 8 comments
Assignees
Labels
kind/bug Categorizes issue or PR as related to a bug. sig/node Categorizes an issue or PR as relevant to SIG Node. triage/accepted Indicates an issue or PR is ready to be actively worked on.

Comments

@SergeyKanzhelev
Member

What happened:

We sometimes observe that metrics stop flowing for a Pod when one of its containers starts reporting 0 as working_set_bytes.

The container in question does nothing but sleep for hours and then checks on some files to do some work. We do not have direct access to the repro.

The check that discards Pod metrics in this case is here:

if containerMetric.CumulativeCpuUsed == 0 || containerMetric.MemoryUsage == 0 {
    klog.V(1).InfoS("Failed getting complete container metric", "containerName", containerName, "containerMetric", containerMetric)
    return nil
} else {

This code was introduced by #759 to filter out situations where the container has already terminated. I would question the reasoning for throwing the entire Pod's stats away when a container is terminated. How would this work in the case of a Job with two non-restartable containers where one of them has already terminated?

I also found similar logic and this comment in the heapster repo: kubernetes-retired/heapster#1708 (comment), which says (as expected) that CPU alone is a bad indicator of a dead container. Both memory and CPU need to be checked, while PR #759 ignores the container if either is zero.
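As a concrete illustration of that stricter check, here is a minimal Go sketch; the containerMetric struct is hypothetical and only mirrors the field names quoted above, not the real metrics-server type:

package main

import "fmt"

// Hypothetical stand-in for the per-container metric in the snippet above.
type containerMetric struct {
    CumulativeCpuUsed uint64
    MemoryUsage       uint64
}

// looksTerminated treats a container as terminated only when BOTH CPU and
// memory are zero, which is the stricter check the heapster comment suggests,
// instead of discarding the pod when either value alone is zero.
func looksTerminated(m containerMetric) bool {
    return m.CumulativeCpuUsed == 0 && m.MemoryUsage == 0
}

func main() {
    // A long-sleeping container: some accumulated CPU, but a zero working set.
    sleeping := containerMetric{CumulativeCpuUsed: 120, MemoryUsage: 0}
    fmt.Println(looksTerminated(sleeping)) // false: keep the pod's metrics
}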

What you expected to happen:

A workingSetBytes of 0 should not result in missing Pod measurements.

Anything else we need to know?:

One question I still cannot confirm and reproduce with confidence is whether 0 workingSetBytes is a legitimate situation. I tried to create a small container under memory pressure, but wasn't able to drive workingSetBytes = usage - total_inactive_memory to 0. It went quite low, but not to zero. If we have a repro for this, then the behavior in this repo is definitely incorrect.
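For reference, here is a minimal sketch of the derivation referenced above, assuming the commonly cited cgroup v1 accounting (usage minus inactive file pages, clamped at zero); the field names and the clamp are assumptions about the accounting, not the actual kubelet/cAdvisor code:

package main

import "fmt"

// workingSetBytes sketches the usage-minus-inactive derivation, clamped at
// zero. Field names follow the cgroupfs stat files (assumption, not the real
// code path).
func workingSetBytes(usageInBytes, totalInactiveFile uint64) uint64 {
    if totalInactiveFile >= usageInBytes {
        return 0 // a reported 0 could come from this clamp on an idle container
    }
    return usageInBytes - totalInactiveFile
}

func main() {
    fmt.Println(workingSetBytes(4<<20, 4<<20)) // 0: fully inactive pages hit the clamp
    fmt.Println(workingSetBytes(8<<20, 3<<20)) // 5 MiB
}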

If this is impossible, then the logic may be legitimate and there is a bug somewhere else. I am opening this issue for discussion in case community wisdom can help understand this better.

/sig node

Environment:

  • Kubernetes distribution (GKE, EKS, Kubeadm, the hard way, etc.):

  • Container Network Setup (flannel, calico, etc.):

  • Kubernetes version (use kubectl version):

  • Metrics Server manifest:

  • Kubelet config:

  • Metrics server logs:

  • Status of Metrics API (kubectl describe apiservice v1beta1.metrics.k8s.io):

/kind bug

@k8s-ci-robot k8s-ci-robot added sig/node Categorizes an issue or PR as relevant to SIG Node. kind/bug Categorizes issue or PR as related to a bug. needs-triage Indicates an issue or PR lacks a `triage/foo` label and requires one. labels Sep 15, 2023
@SergeyKanzhelev
Member Author

@dashpole

/assign @dgrisonnet
/triage accepted

Thanks @SergeyKanzhelev

@k8s-ci-robot k8s-ci-robot added triage/accepted Indicates an issue or PR is ready to be actively worked on. and removed needs-triage Indicates an issue or PR lacks a `triage/foo` label and requires one. labels Sep 21, 2023
@kwiesmueller
Member

I'd add the question of whether metrics-server should be the one deciding when a pod is terminated.
I would have assumed it determines that based on the kubelet container status, not based on metrics.
Not sure if that's just not possible or why it wasn't chosen back then.
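A minimal sketch of what checking the kubelet-reported status could look like, assuming metrics-server had the Pod object available (it does not consume Pod status this way today, so the integration point is purely illustrative):

package main

import (
    "fmt"

    corev1 "k8s.io/api/core/v1"
)

// isContainerTerminated inspects the container status reported by the kubelet
// instead of inferring termination from metric values.
func isContainerTerminated(pod *corev1.Pod, containerName string) bool {
    for _, cs := range pod.Status.ContainerStatuses {
        if cs.Name == containerName {
            return cs.State.Terminated != nil
        }
    }
    return false
}

func main() {
    pod := &corev1.Pod{
        Status: corev1.PodStatus{
            ContainerStatuses: []corev1.ContainerStatus{
                {Name: "sidecar", State: corev1.ContainerState{Terminated: &corev1.ContainerStateTerminated{ExitCode: 0}}},
            },
        },
    }
    fmt.Println(isContainerTerminated(pod, "sidecar")) // true
}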

@SergeyKanzhelev
Member Author

I think it is because the experience is based on parsing prometheus metrics. I'm not sure if pod status is available.

We had a short discussion with @mrunalp. We still need to understand exactly how this happens. Once we know, and we confirm that this is expected, we may need to expose additional metrics to help understand what that 0 means. But again, we will need to fully understand this first.

@pacoxu
Member

pacoxu commented Oct 4, 2023

/cc @serathius

@serathius
Contributor

Metrics Server treats 0 in WSS as undefined behavior and skips reporting the pod to avoid autoscaling making an incorrect decision. Better not to make a decision than to make a bad one.

@raywainman

@serathius I think in principle that makes sense; however, in practice we have seen a case where a single container in a pod with workingSetBytes = 0 (this container is not part of the main application and is really not relevant to the overall pod performance) prevents the entire pod from being reported and therefore prevents autoscaling.

I'm curious if any progress has been made on the investigation here or if we have a theory as to how this can happen?

Maybe a middle ground here is to just drop the single bad container's metrics?
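A minimal sketch of that middle ground, using a hypothetical containerMetric struct (not the real metrics-server type): drop only the containers with incomplete metrics and keep reporting the rest of the pod.

package main

import "fmt"

// Hypothetical per-container metric mirroring the fields quoted earlier.
type containerMetric struct {
    Name              string
    CumulativeCpuUsed uint64
    MemoryUsage       uint64
}

// dropIncompleteContainers discards only the containers whose metrics look
// incomplete instead of returning nil for the whole pod. Purely illustrative,
// not the current metrics-server behavior.
func dropIncompleteContainers(metrics []containerMetric) []containerMetric {
    kept := make([]containerMetric, 0, len(metrics))
    for _, m := range metrics {
        if m.CumulativeCpuUsed == 0 || m.MemoryUsage == 0 {
            continue // skip only this container, not the whole pod
        }
        kept = append(kept, m)
    }
    return kept
}

func main() {
    pod := []containerMetric{
        {Name: "app", CumulativeCpuUsed: 5000, MemoryUsage: 64 << 20},
        {Name: "sleeper", CumulativeCpuUsed: 10, MemoryUsage: 0},
    }
    fmt.Println(dropIncompleteContainers(pod)) // only "app" remains
}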

@serathius
Contributor

serathius commented Oct 12, 2023

It's just the status quo; if someone can bring a nicely documented case showing that the kernel can report workingSetBytes=0 and that this is a correct state, we can switch it. We could also consider leaving the decision up to the user.

On the other hand, what will the HPA do if there are 10 pods under it and one of them has an issue? Can the HPA make a decision based on the remaining 9 pods?
