With Kubernetes 1.24, metrics-server:v0.6.1 not collecting correct memory utilization #1349
Comments
/assign @CatherineF-dev
/assign @dgrisonnet
QQ: what is the total node memory?
8GB
QQ: does 8GB mean both
`free -m` looks correct, but `kubectl top no` does not match `free -m`.
Could you sum up the memory of all pods running on the same node? What is this value?
You can see below that only 9 pods are running on this node, and their memory utilization sums to 1116 MB in total.
Hi, any suggestion on this? `kubectl top node` says that 5079 Mi of memory is in use, but the `free` command shows that only 1322 Mi is being used. Which one should we consider? Or am I misunderstanding the `kubectl top node` output?
Could you try scraping the kubelet Summary API metrics, which are the data source for metrics-server?
https://github.com/kubernetes-sigs/metrics-server/blob/master/KNOWN_ISSUES.md
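One way to do that check: metrics-server reads node memory usage from the Summary API's `workingSetBytes` field, which you can fetch with `kubectl get --raw "/api/v1/nodes/<node>/proxy/stats/summary"`. A minimal Python sketch of reading that field (the JSON values below are made up for illustration, not from this cluster):

```python
import json

# Hypothetical sample of the kubelet Summary API response.
# Fetch the real one with:
#   kubectl get --raw "/api/v1/nodes/<node>/proxy/stats/summary"
sample = json.loads("""
{
  "node": {
    "memory": {
      "workingSetBytes": 5812000000,
      "usageBytes": 7790000000,
      "availableBytes": 2150000000
    }
  }
}
""")

mem = sample["node"]["memory"]
# metrics-server reports workingSetBytes as the node's memory usage.
# Working set includes active page cache, so it is not the same thing
# as free(1)'s "used" column.
working_set_mi = mem["workingSetBytes"] / (1024 * 1024)
print(f"working set: {working_set_mi:.0f} Mi")
```

If the working set reported here matches `kubectl top node`, metrics-server is faithfully relaying what the kubelet reports, and the discrepancy is purely in how the kernel accounts page cache.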
`free -m` shows low memory utilization, but `kubectl top` shows high memory utilization.
Kubernetes version: 1.24.2
metrics-server: v0.6.1
```
$ free -m
              total        used        free      shared  buff/cache   available
Mem:           7593        3264         160           7        4168        4018
```
```
$ kubectl top no
NAME            CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%
10.231.11.100   750m         18%    5543Mi          76%
```
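The gap between the two outputs is consistent with how the kubelet accounts memory: the cgroup working set is roughly memory usage (which includes page cache) minus `inactive_file`, while `free`'s "used" column excludes buff/cache entirely. A rough reconciliation with the numbers above (the inactive_file split is inferred from the reported values, not measured):

```python
# Numbers taken from the `free -m` and `kubectl top no` outputs above (Mi).
total, used, free_mem, buff_cache = 7593, 3264, 160, 4168
top_node_reports = 5543

# free(1)'s "used" excludes buff/cache entirely, so the columns add up:
#   total ~= used + free + buff/cache
assert (total - free_mem - buff_cache - used) in range(-2, 3)  # rounding slack

# The kubelet reports the cgroup working set:
#   working_set = memory_usage - inactive_file
# where memory_usage counts page cache, so the working set lands
# between free(1)'s "used" and "used + buff/cache":
assert used <= top_node_reports <= used + buff_cache

# The implied inactive_file portion of the cache (inferred, not measured):
inactive_file = (used + buff_cache) - top_node_reports
print(f"~{inactive_file} Mi of cache treated as reclaimable")
```

So neither number is wrong: `free` answers "how much is used excluding cache," while `kubectl top node` answers "how much memory is the node actively holding on to," which is what the scheduler and eviction logic care about.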
These are a few of the pods running on this node:
```
NAMESPACE     NAME                             CPU(cores)   MEMORY(bytes)
default       xxxxxxxxxxxxxx                   3m           233Mi
default       xxxxxxxxxxxxxxx                  3m           254Mi
default       xxxxxxxxxxxxxxxx                 370m         909Mi
default       xxxxxxxxxxxxxxxxxxxx             5m           82Mi
default       xxxxxxxxxxxxxxxxxxxxx            1m           1Mi
default       xxxxxxx                          4m           89Mi
dynatrace     xxxxxxxxxxxxxxactivegate-0       66m          712Mi
dynatrace     xxxxxxxxxxxxxxoneagent-m8sbs     53m          385Mi
kube-system   kube-flannel-ds-24mtl            3m           20Mi
kube-system   node-local-dns-pfmns             3m           19Mi
xxxxxx        xxxxxxx                          1m           38Mi
```
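Summing the MEMORY(bytes) column above gives about 2742 Mi, well below the 5543 Mi node figure; the remainder is system daemons, the kubelet itself, and active page cache living outside pod cgroups. A quick sketch of that sum over the pasted rows:

```python
# Sum the MEMORY(bytes) column of the `kubectl top pods` output above.
rows = """\
default xxxxxxxxxxxxxx 3m 233Mi
default xxxxxxxxxxxxxxx 3m 254Mi
default xxxxxxxxxxxxxxxx 370m 909Mi
default xxxxxxxxxxxxxxxxxxxx 5m 82Mi
default xxxxxxxxxxxxxxxxxxxxx 1m 1Mi
default xxxxxxx 4m 89Mi
dynatrace xxxxxxxxxxxxxxactivegate-0 66m 712Mi
dynatrace xxxxxxxxxxxxxxoneagent-m8sbs 53m 385Mi
kube-system kube-flannel-ds-24mtl 3m 20Mi
kube-system node-local-dns-pfmns 3m 19Mi
xxxxxx xxxxxxx 1m 38Mi
""".splitlines()

# Each row ends in a value like "233Mi"; strip the unit and sum.
total_mi = sum(int(line.split()[-1].rstrip("Mi")) for line in rows)
print(f"pod total: {total_mi} Mi")  # 2742 Mi
```

Note that pod-level figures are themselves working sets per pod cgroup, so even this sum includes some page cache attributed to individual pods.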