
With Kubernetes 1.24, metrics-server v0.6.1 not collecting correct memory utilization #1349

Open
bpsingh11oct85 opened this issue Oct 18, 2023 · 10 comments

@bpsingh11oct85

free -m is showing lower memory utilization, but kubectl top is showing high memory utilization.
Kubernetes version: 1.24.2
metrics-server: v0.6.1

free -m
              total        used        free      shared  buff/cache   available
Mem:           7593        3264         160           7        4168        4018

kubectl top no
NAME            CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%
10.231.11.100   750m         18%    5543Mi          76%

These are a few of the pods running on this node:

NAMESPACE     NAME                             CPU(cores)   MEMORY(bytes)
default       xxxxxxxxxxxxxx                   3m           233Mi
default       xxxxxxxxxxxxxxx                  3m           254Mi
default       xxxxxxxxxxxxxxxx                 370m         909Mi
default       xxxxxxxxxxxxxxxxxxxx             5m           82Mi
default       xxxxxxxxxxxxxxxxxxxxx            1m           1Mi
default       xxxxxxx                          4m           89Mi
dynatrace     xxxxxxxxxxxxxxactivegate-0       66m          712Mi
dynatrace     xxxxxxxxxxxxxxoneagent-m8sbs     53m          385Mi
kube-system   kube-flannel-ds-24mtl            3m           20Mi
kube-system   node-local-dns-pfmns             3m           19Mi
xxxxxx        xxxxxxx                          1m           38Mi

@k8s-ci-robot k8s-ci-robot added the needs-triage Indicates an issue or PR lacks a `triage/foo` label and requires one. label Oct 18, 2023
@dashpole

/assign @CatherineF-dev
/triage accepted

@k8s-ci-robot k8s-ci-robot added triage/accepted Indicates an issue or PR is ready to be actively worked on. and removed needs-triage Indicates an issue or PR lacks a `triage/foo` label and requires one. labels Oct 19, 2023
@dashpole

/assign @dgrisonnet

@CatherineF-dev
Contributor

QQ: what is the total node memory?
Is it 5543Mi / 76% = 7293?

@bpsingh11oct85
Author

8GB

@CatherineF-dev
Contributor

QQ: does 8GB mean that neither free -m nor kubectl top no is accurate?

  • free -m gave 7593 instead of 8*1024
  • kubectl top no gave 5543/0.76 = 7293

@bpsingh11oct85
Author

free -m looks correct, but kubectl top no does not match free -m.

@CatherineF-dev
Contributor

Could you sum up the memory of all the pods running on the same node? Use kubectl top pods + grep.

What is this value?
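
For example, a rough way to get that total (a sketch only; <node-name> is a placeholder for your node, and it assumes every MEMORY(bytes) value is reported in Mi):

# list the pods scheduled on the node, then sum their MEMORY(bytes)
kubectl get pods -A -o wide --no-headers | grep <node-name> | awk '{print $2}' > /tmp/pods-on-node
kubectl top pods -A --no-headers | grep -F -f /tmp/pods-on-node | awk '{gsub("Mi","",$4); sum+=$4} END {print sum " Mi"}'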

@bpsingh11oct85
Author

You can see below that only 9 pods are running on this node, and their memory utilization sums to 1116 MiB in total:
37
106
101
1
46
695
31
58
41

1116

kubectl top no |grep -w 10.231.11.21
10.x.x.x 102m 2% 5079Mi 69%

In the above, metrics-server is reporting 69% memory utilization, while below the OS is saying 5962 MiB is available:

              total        used        free      shared  buff/cache   available
Mem:           7593        1322         334           7        5935        5962
Swap:             0           0           0

@bpsingh11oct85
Author

Hi, any suggestions on this? kubectl top node says that 5079 Mi of memory is in use, but the free command shows that only 1322 Mi is being used. Which one should we consider?

Or am I misunderstanding the kubectl top node output?

@CatherineF-dev
Contributor

CatherineF-dev commented Jan 5, 2024

Could you try scraping the kubelet Summary API metrics, which are the data source for metrics-server?

# ssh into this node
curl http://<node_ip>:10255/stats/summary   # kubelet read-only port, plain HTTP (often disabled)
# if 10255 is not open, use the secure port 10250 over HTTPS (it requires authentication):
# curl -k -H "Authorization: Bearer <token>" https://<node_ip>:10250/stats/summary

# then check the node memory (node.memory.workingSetBytes) in the response
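
If hitting the node directly is awkward, the same Summary API can also be reached through the API server proxy (a sketch; <node-name> is whatever kubectl get nodes prints, and jq is only used to trim the output):

# fetch the kubelet Summary API via the apiserver and look at the node-level memory stats
kubectl get --raw "/api/v1/nodes/<node-name>/proxy/stats/summary" | jq '.node.memory'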

https://github.com/kubernetes-sigs/metrics-server/blob/master/KNOWN_ISSUES.md
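
Note that kubectl top node reports the node's working set (memory usage minus inactive file cache), which still counts active page cache, while the "used" column of free excludes buff/cache entirely, so the two numbers measure different things. A rough way to reproduce the working-set figure directly on the node (a sketch; assumes cgroup v1 paths under /sys/fs/cgroup/memory, adjust for cgroup v2):

# working set of the root memory cgroup, roughly what the kubelet reports for the node
usage=$(cat /sys/fs/cgroup/memory/memory.usage_in_bytes)
inactive_file=$(awk '/^total_inactive_file/ {print $2}' /sys/fs/cgroup/memory/memory.stat)
echo "working set: $(( (usage - inactive_file) / 1024 / 1024 )) Mi"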
