
failed to garbage collect required amount of images. Wanted to free 473842483 bytes, but freed 0 bytes #71869

Open
samuela opened this issue Dec 8, 2018 · 67 comments
Assignees
Labels
good first issue: Denotes an issue ready for a new contributor, according to the "help wanted" guidelines.
help wanted: Denotes an issue that needs help from a contributor. Must meet "help wanted" guidelines.
kind/bug: Categorizes issue or PR as related to a bug.
sig/node: Categorizes an issue or PR as relevant to SIG Node.
triage/accepted: Indicates an issue or PR is ready to be actively worked on.

Comments

@samuela

samuela commented Dec 8, 2018

What happened: I've been seeing a number of evictions recently that appear to be due to disk pressure:

$ kubectl get pod kumo-go-api-d46f56779-jl6s2 --namespace=kumo-main -o yaml
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: 2018-12-06T10:05:25Z
  generateName: kumo-go-api-d46f56779-
  labels:
    io.kompose.service: kumo-go-api
    pod-template-hash: "802912335"
  name: kumo-go-api-d46f56779-jl6s2
  namespace: kumo-main
  ownerReferences:
  - apiVersion: extensions/v1beta1
    blockOwnerDeletion: true
    controller: true
    kind: ReplicaSet
    name: kumo-go-api-d46f56779
    uid: c0a9355e-f780-11e8-b336-42010aa80057
  resourceVersion: "11617978"
  selfLink: /api/v1/namespaces/kumo-main/pods/kumo-go-api-d46f56779-jl6s2
  uid: 7337e854-f93e-11e8-b336-42010aa80057
spec:
  containers:
  - env:
    - redacted...
    image: gcr.io/<redacted>/kumo-go-api@sha256:c6a94fc1ffeb09ea6d967f9ab14b9a26304fa4d71c5798acbfba5e98125b81da
    imagePullPolicy: Always
    name: kumo-go-api
    ports:
    - containerPort: 5000
      protocol: TCP
    resources: {}
    terminationMessagePath: /dev/termination-log
    terminationMessagePolicy: File
    volumeMounts:
    - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
      name: default-token-t6jkx
      readOnly: true
  dnsPolicy: ClusterFirst
  nodeName: gke-kumo-customers-n1-standard-1-pree-0cd7990c-jg9s
  restartPolicy: Always
  schedulerName: default-scheduler
  securityContext: {}
  serviceAccount: default
  serviceAccountName: default
  terminationGracePeriodSeconds: 30
  tolerations:
  - effect: NoExecute
    key: node.kubernetes.io/not-ready
    operator: Exists
    tolerationSeconds: 300
  - effect: NoExecute
    key: node.kubernetes.io/unreachable
    operator: Exists
    tolerationSeconds: 300
  volumes:
  - name: default-token-t6jkx
    secret:
      defaultMode: 420
      secretName: default-token-t6jkx
status:
  message: 'The node was low on resource: nodefs.'
  phase: Failed
  reason: Evicted
  startTime: 2018-12-06T10:05:25Z

Taking a look at kubectl get events, I see these warnings:

$ kubectl get events
LAST SEEN   FIRST SEEN   COUNT     NAME                                                                   KIND      SUBOBJECT   TYPE      REASON          SOURCE                                                         MESSAGE
2m          13h          152       gke-kumo-customers-n1-standard-1-pree-0cd7990c-jg9s.156e07f40b90ed91   Node                  Warning   ImageGCFailed   kubelet, gke-kumo-customers-n1-standard-1-pree-0cd7990c-jg9s   (combined from similar events): failed to garbage collect required amount of images. Wanted to free 473948979 bytes, but freed 0 bytes
37m         37m          1         gke-kumo-customers-n1-standard-1-pree-0cd7990c-jg9s.156e3127ebc715c3   Node                  Warning   ImageGCFailed   kubelet, gke-kumo-customers-n1-standard-1-pree-0cd7990c-jg9s   failed to garbage collect required amount of images. Wanted to free 473674547 bytes, but freed 0 bytes

Digging a bit deeper:

$ kubectl get event gke-kumo-customers-n1-standard-1-pree-0cd7990c-jg9s.156e07f40b90ed91 -o yaml
apiVersion: v1
count: 153
eventTime: null
firstTimestamp: 2018-12-07T11:01:06Z
involvedObject:
  kind: Node
  name: gke-kumo-customers-n1-standard-1-pree-0cd7990c-jg9s
  uid: gke-kumo-customers-n1-standard-1-pree-0cd7990c-jg9s
kind: Event
lastTimestamp: 2018-12-08T00:16:09Z
message: '(combined from similar events): failed to garbage collect required amount
  of images. Wanted to free 474006323 bytes, but freed 0 bytes'
metadata:
  creationTimestamp: 2018-12-07T11:01:07Z
  name: gke-kumo-customers-n1-standard-1-pree-0cd7990c-jg9s.156e07f40b90ed91
  namespace: default
  resourceVersion: "381976"
  selfLink: /api/v1/namespaces/default/events/gke-kumo-customers-n1-standard-1-pree-0cd7990c-jg9s.156e07f40b90ed91
  uid: 65916e4b-fa0f-11e8-ae9a-42010aa80058
reason: ImageGCFailed
reportingComponent: ""
reportingInstance: ""
source:
  component: kubelet
  host: gke-kumo-customers-n1-standard-1-pree-0cd7990c-jg9s
type: Warning

There's actually remarkably little here. This message doesn't say anything regarding why image GC was initiated or why it was unable to recover more space.

What you expected to happen: Image GC to work correctly, or at least fail to schedule pods onto nodes that do not have sufficient disk space.

How to reproduce it (as minimally and precisely as possible): Run and stop as many pods as possible on a node in order to encourage disk pressure. Then observe these errors.
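A rough sketch of that reproduction (illustrative only: the node name is a placeholder, and the tensorflow tags are just examples of large images with mostly distinct layers):

# churn several large, distinct images through one small-disk node, then delete the pods
i=0
for tag in 2.9.0 2.10.0 2.11.0; do
  i=$((i+1))
  kubectl run image-churn-$i --image=tensorflow/tensorflow:$tag --restart=Never \
    --overrides='{"spec":{"nodeName":"<some-node>"}}' -- sleep 300
done
kubectl delete pod image-churn-1 image-churn-2 image-churn-3
kubectl get events --field-selector reason=ImageGCFailed    # watch for the GC warnings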

Anything else we need to know?: n/a

Environment:

  • Kubernetes version (use kubectl version):
Client Version: version.Info{Major:"1", Minor:"10", GitVersion:"v1.10.7", GitCommit:"0c38c362511b20a098d7cd855f1314dad92c2780", GitTreeState:"clean", BuildDate:"2018-08-20T10:09:03Z", GoVersion:"go1.9.3", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"10+", GitVersion:"v1.10.7-gke.11", GitCommit:"fa90543563c9cfafca69128ce8cd9ecd5941940f", GitTreeState:"clean", BuildDate:"2018-11-08T20:22:21Z", GoVersion:"go1.9.3b4", Compiler:"gc", Platform:"linux/amd64"}
  • Cloud provider or hardware configuration: GKE
  • OS (e.g. from /etc/os-release): I'm running macOS 10.14, nodes are running Container-Optimized OS (cos).
  • Kernel (e.g. uname -a): Darwin D-10-19-169-80.dhcp4.washington.edu 18.0.0 Darwin Kernel Version 18.0.0: Wed Aug 22 20:13:40 PDT 2018; root:xnu-4903.201.2~1/RELEASE_X86_64 x86_64
  • Install tools: n/a
  • Others: n/a

/kind bug

@k8s-ci-robot k8s-ci-robot added kind/bug Categorizes issue or PR as related to a bug. needs-sig Indicates an issue or PR lacks a `sig/foo` label and requires one. labels Dec 8, 2018
@samuela
Author

samuela commented Dec 8, 2018

/sig gcp

@k8s-ci-robot k8s-ci-robot added sig/gcp and removed needs-sig Indicates an issue or PR lacks a `sig/foo` label and requires one. labels Dec 8, 2018
@samuela
Author

samuela commented Dec 9, 2018

I just upgraded my master version and nodes to 1.11.3-gke.18 to see if that would be any help, but I'm still seeing the exact same thing.

@samuela
Author

samuela commented Dec 9, 2018

FWIW "Boot disk size in GB (per node)" was set to the minimum, 10 Gb.

@hgokavarapuz

@samuela any update on the issue? I see the same problem.

@samuela
Author

samuela commented Dec 12, 2018

@hgokavarapuz No update as far as I've heard. Def seems like a serious issue for GKE.

@hgokavarapuz

@samuela I saw this issue on AWS but was able to work around it by using a different AMI. I still have to check what difference in the AMI causes it to happen, though.

@samuela
Author

samuela commented Dec 12, 2018

@hgokavarapuz Interesting... maybe this has something to do with the node OS/setup then.

@hgokavarapuz

hgokavarapuz commented Dec 12, 2018 via email

@dims
Member

dims commented Dec 26, 2018

@hgokavarapuz check the kubelet logs for clues

@hgokavarapuz

I was able to fix mine. It was an issue with the AMI I was using, which had the /var folder mounted on an EBS volume of restricted size, causing problems with Docker container creation. It was not directly obvious from the logs, but checking the disk space and a few other things made it clear.

@samuela
Author

samuela commented Dec 27, 2018

@hgokavarapuz Are you sure that this actually fixes the problem and doesn't just require more image downloads for the bug to occur?

In my case this was happening within the GKE allowed disk sizes, so I'd say there's definitely still some sort of bug in GKE here at least.

It would also be good to have some sort of official position on the minimum disk size required in order to run Kubernetes on a node without getting this error. Otherwise it's not clear exactly how large the volumes must be in order to be within spec for running Kubernetes.

@hgokavarapuz

@samuela I haven't tried it on GKE, but that was the issue on AWS with some of the AMIs. Maybe there is an issue with GKE as well.

@kyessenov

We're hitting something similar on GKE v1.11.5-gke.4. There seems to be some issue with GC not keeping up, as seen by the following events:

Events:
  Type     Reason                 Age                 From                                               Message
  ----     ------                 ----                ----                                               -------
  Warning  FreeDiskSpaceFailed    47m                 kubelet, gke-v11-service-graph-pool-c6e93d11-k6h6  failed to garbage collect required amount of images. Wanted to free 758374400 bytes, but freed 375372075 bytes
  Warning  FreeDiskSpaceFailed    42m                 kubelet, gke-v11-service-graph-pool-c6e93d11-k6h6  failed to garbage collect required amount of images. Wanted to free 898760704 bytes, but freed 0 bytes
  Warning  ImageGCFailed          42m                 kubelet, gke-v11-service-graph-pool-c6e93d11-k6h6  failed to garbage collect required amount of images. Wanted to free 898760704 bytes, but freed 0 bytes
  Normal   NodeHasDiskPressure    37m                 kubelet, gke-v11-service-graph-pool-c6e93d11-k6h6  Node gke-v11-service-graph-pool-c6e93d11-k6h6 status is now: NodeHasDiskPressure
  Warning  FreeDiskSpaceFailed    37m                 kubelet, gke-v11-service-graph-pool-c6e93d11-k6h6  failed to garbage collect required amount of images. Wanted to free 1430749184 bytes, but freed 0 bytes
  Warning  ImageGCFailed          37m                 kubelet, gke-v11-service-graph-pool-c6e93d11-k6h6  failed to garbage collect required amount of images. Wanted to free 1430749184 bytes, but freed 0 bytes
  Warning  EvictionThresholdMet   36m (x21 over 37m)  kubelet, gke-v11-service-graph-pool-c6e93d11-k6h6  Attempting to reclaim ephemeral-storage
  Warning  ImageGCFailed          32m                 kubelet, gke-v11-service-graph-pool-c6e93d11-k6h6  failed to garbage collect required amount of images. Wanted to free 1109360640 bytes, but freed 0 bytes
  Warning  FreeDiskSpaceFailed    27m                 kubelet, gke-v11-service-graph-pool-c6e93d11-k6h6  failed to garbage collect required amount of images. Wanted to free 1367126016 bytes, but freed 0 bytes
  Warning  ImageGCFailed          22m                 kubelet, gke-v11-service-graph-pool-c6e93d11-k6h6  failed to garbage collect required amount of images. Wanted to free 1885589504 bytes, but freed 0 bytes
  Warning  FreeDiskSpaceFailed    17m                 kubelet, gke-v11-service-graph-pool-c6e93d11-k6h6  failed to garbage collect required amount of images. Wanted to free 2438008832 bytes, but freed 0 bytes
  Warning  FreeDiskSpaceFailed    12m                 kubelet, gke-v11-service-graph-pool-c6e93d11-k6h6  failed to garbage collect required amount of images. Wanted to free 2223022080 bytes, but freed 0 bytes
  Warning  ImageGCFailed          7m                  kubelet, gke-v11-service-graph-pool-c6e93d11-k6h6  failed to garbage collect required amount of images. Wanted to free 2358378496 bytes, but freed 0 bytes
  Normal   NodeHasNoDiskPressure  2m (x4 over 4h)     kubelet, gke-v11-service-graph-pool-c6e93d11-k6h6  Node gke-v11-service-graph-pool-c6e93d11-k6h6 status is now: NodeHasNoDiskPressure

Scanning the kubelet logs, I see the following entries:

Feb 07 21:15:31 gke-v11-service-graph-pool-c6e93d11-k6h6 kubelet[1594]: I0207 21:15:31.447179    1594 image_gc_manager.go:300] [imageGCManager]: Disk usage on image filesystem is at 99% which is over the high threshold (85%). Trying to free 2358378496 byte
Feb 07 21:15:31 gke-v11-service-graph-pool-c6e93d11-k6h6 kubelet[1594]: E0207 21:15:31.452366    1594 kubelet.go:1253] Image garbage collection failed multiple times in a row: failed to garbage collect required amount of images. Wanted to free 2358378496 b
Feb 07 21:15:31 gke-v11-service-graph-pool-c6e93d11-k6h6 kubelet[1594]: I0207 21:15:31.711566    1594 kuberuntime_manager.go:513] Container {Name:metadata-agent Image:gcr.io/stackdriver-agents/stackdriver-metadata-agent:0.2-0.0.21-1 Command:[] Args:[-o Kub
Feb 07 21:15:32 gke-v11-service-graph-pool-c6e93d11-k6h6 kubelet[1594]: I0207 21:15:32.004882    1594 cloud_request_manager.go:89] Requesting node addresses from cloud provider for node "gke-v11-service-graph-pool-c6e93d11-k6h6"
Feb 07 21:15:32 gke-v11-service-graph-pool-c6e93d11-k6h6 kubelet[1594]: I0207 21:15:32.008529    1594 cloud_request_manager.go:108] Node addresses from cloud provider for node "gke-v11-service-graph-pool-c6e93d11-k6h6" collected
Feb 07 21:15:34 gke-v11-service-graph-pool-c6e93d11-k6h6 kubelet[1594]: I0207 21:15:34.817530    1594 kube_docker_client.go:348] Stop pulling image "gcr.io/stackdriver-agents/stackdriver-logging-agent:0.8-1.6.2-1": "e807eb07af89: Extracting [==============
Feb 07 21:15:34 gke-v11-service-graph-pool-c6e93d11-k6h6 kubelet[1594]: E0207 21:15:34.817616    1594 remote_image.go:108] PullImage "gcr.io/stackdriver-agents/stackdriver-logging-agent:0.8-1.6.2-1" from image service failed: rpc error: code = Unknown desc
Feb 07 21:15:34 gke-v11-service-graph-pool-c6e93d11-k6h6 kubelet[1594]: E0207 21:15:34.817823    1594 kuberuntime_manager.go:733] container start failed: ErrImagePull: rpc error: code = Unknown desc = failed to register layer: Error processing tar file(exi
Feb 07 21:15:35 gke-v11-service-graph-pool-c6e93d11-k6h6 kubelet[1594]: W0207 21:15:35.057924    1594 kubelet_getters.go:264] Path "/var/lib/kubelet/pods/652e958e-2b1d-11e9-827c-42010a800fdc/volumes" does not exist
Feb 07 21:15:35 gke-v11-service-graph-pool-c6e93d11-k6h6 kubelet[1594]: I0207 21:15:35.058035    1594 eviction_manager.go:400] eviction manager: pods fluentd-gcp-v3.1.1-spdfd_kube-system(652e958e-2b1d-11e9-827c-42010a800fdc) successfully cleaned up
Feb 07 21:15:35 gke-v11-service-graph-pool-c6e93d11-k6h6 kubelet[1594]: E0207 21:15:35.091740    1594 pod_workers.go:186] Error syncing pod 7e06145a-2b1d-11e9-827c-42010a800fdc ("fluentd-gcp-v3.1.1-bgdg6_kube-system(7e06145a-2b1d-11e9-827c-42010a800fdc)"),
Feb 07 21:15:35 gke-v11-service-graph-pool-c6e93d11-k6h6 kubelet[1594]: W0207 21:15:35.179545    1594 eviction_manager.go:329] eviction manager: attempting to reclaim ephemeral-storage

It seems like something is preventing the GC from reclaiming storage fast enough. The node looks like it eventually recovers, but some pods get evicted in the process.
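When a node is in this state it can help to SSH in and see what is actually occupying the image filesystem; a minimal sketch, assuming the Docker runtime these nodes use:

df -h /var/lib/docker     # how full the image filesystem really is
docker system df          # space used by images, containers, and local volumes
docker images             # the candidates the kubelet's image GC gets to choose from
docker ps -a --size       # per-container writable-layer sizes, exited containers included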

@angegar

angegar commented Feb 22, 2019

I am encountering the same issue. I deployed the stack with kops on AWS and my k8s version is 1.11.6. The problem is that I get application downtime once a week when the disk pressure happens.

@deathrizzo

Same issue here. I extended the EBS volumes thinking that would fix it.
Using AMI k8s-1.10-debian-jessie-amd64-hvm-ebs-2018-08-17 (ami-009b9699070ffc46f).

@widgetpl

widgetpl commented May 8, 2019

I faced a similar issue, but on AKS. When we scale the cluster down with the az CLI and then back up, I would expect the new nodes to be clean, i.e. without any garbage, but:

$ kubectl get no
NAME                       STATUS   ROLES   AGE   VERSION
aks-agentpool-11344223-0   Ready    agent   77d   v1.12.4
aks-agentpool-11344223-1   Ready    agent   9h    v1.12.4
aks-agentpool-11344223-2   Ready    agent   9h    v1.12.4
aks-agentpool-11344223-3   Ready    agent   9h    v1.12.4
aks-agentpool-11344223-4   Ready    agent   9h    v1.12.4
aks-agentpool-11344223-5   Ready    agent   9h    v1.12.4

and when I SSH into one of them I can see plenty of old images, like

$ docker images | grep addon-resizer
k8s.gcr.io/addon-resizer                               1.8.4               5ec630648120        6 months ago        38.3MB
k8s.gcr.io/addon-resizer                               1.8.1               6c0dbeaa8d20        17 months ago       33MB
k8s.gcr.io/addon-resizer                               1.7                 9b0815c87118        2 years ago         39MB

or

$ docker images | grep k8s.gcr.io/cluster-autoscaler
k8s.gcr.io/cluster-autoscaler                          v1.14.0             ef6c40006faf        7 weeks ago         142MB
k8s.gcr.io/cluster-autoscaler                          v1.13.2             0f47d27d8e0d        2 months ago        137MB
k8s.gcr.io/cluster-autoscaler                          v1.12.3             9119261ec106        2 months ago        232MB
k8s.gcr.io/cluster-autoscaler                          v1.3.7              c711df426ac6        2 months ago        217MB
k8s.gcr.io/cluster-autoscaler                          v1.12.2             d67faca6c0aa        3 months ago        232MB
k8s.gcr.io/cluster-autoscaler                          v1.13.1             39c073d73c1e        5 months ago        137MB
k8s.gcr.io/cluster-autoscaler                          v1.3.4              6168be341178        6 months ago        217MB
k8s.gcr.io/cluster-autoscaler                          v1.3.3              bd9362bb17a5        7 months ago        217MB
k8s.gcr.io/cluster-autoscaler                          v1.2.2              2378f4474aa3        11 months ago       209MB
k8s.gcr.io/cluster-autoscaler                          v1.1.2              e137f4b4d451        14 months ago       198MB

which is crazy, as I also see plenty of the errors below

  Type     Reason               Age    From                               Message
  ----     ------               ----   ----                               -------
  Warning  FreeDiskSpaceFailed  15m    kubelet, aks-agentpool-11344223-5  failed to garbage collect required amount of images. Wanted to free 1297139302 bytes, but freed 0 bytes
  Warning  FreeDiskSpaceFailed  10m    kubelet, aks-agentpool-11344223-5  failed to garbage collect required amount of images. Wanted to free 1447237222 bytes, but freed 0 bytes
  Warning  ImageGCFailed        10m    kubelet, aks-agentpool-11344223-5  failed to garbage collect required amount of images. Wanted to free 1447237222 bytes, but freed 0 bytes
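Until the automatic GC behaves, a manual prune on the affected agent node is a workable stopgap; a sketch, assuming the Docker runtime (anything removed is simply re-pulled the next time a pod needs it):

docker image prune -a --filter "until=24h"   # remove unreferenced images older than 24h
docker container prune                       # remove exited containers so their layers become unreferenced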

@k8s-ci-robot k8s-ci-robot added area/provider/gcp Issues or PRs related to gcp provider needs-sig Indicates an issue or PR lacks a `sig/foo` label and requires one. and removed sig/gcp labels Aug 6, 2019
@k8s-ci-robot
Contributor

@samuela: There are no sig labels on this issue. Please add a sig label by either:

  1. mentioning a sig: @kubernetes/sig-<group-name>-<group-suffix>
    e.g., @kubernetes/sig-contributor-experience-<group-suffix> to notify the contributor experience sig, OR

  2. specifying the label manually: /sig <group-name>
    e.g., /sig scalability to apply the sig/scalability label

Note: Method 1 will trigger an email to the group. See the group list.
The <group-suffix> in method 1 has to be replaced with one of these: bugs, feature-requests, pr-reviews, test-failures, proposals.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@rubencabrera

I am hitting this on OpenStack using v1.11.10

Node is completely out of disk space and kubelet logs are now a loop of:

E1029 06:41:37.397348    8907 remote_runtime.go:278] ContainerStatus "redacted" from runtime service failed: rpc error: code = Unknown desc = unable to inspect docker image "sha256:redacted" while inspecting docker container "redacted": no such image: "sha256:redacted"
Oct 29 06:41:37 node-name bash[8907]: E1029 06:41:37.397378    8907 kuberuntime_container.go:391] ContainerStatus for redacted error: rpc error: code = Unknown desc = unable to inspect docker image "sha256:redacted" while inspecting docker container "redacted": no such image: "sha256:redacted"
Oct 29 06:41:37 node-name bash[8907]: E1029 06:41:37.397388    8907 kuberuntime_manager.go:873] getPodContainerStatuses for pod "coredns-49t6c_kube-system(redacted)" failed: rpc error: code = Unknown desc = unable to inspect docker image "sha256:redacted" while inspecting docker container "redacted": no such image: "sha256:redacted"
Oct 29 06:41:37 node-name bash[8907]: E1029 06:41:37.397404    8907 generic.go:241] PLEG: Ignoring events for pod coredns-49t6c/kube-system: rpc error: code = Unknown desc = unable to inspect docker image "sha256:redacted" while inspecting docker container "redacted": no such image: "sha256:redacted"

@rubencabrera

The issue for me was caused by a container taking up a lot of disk space in a short amount of time. This happened on multiple nodes. The container was evicted (every pod on the node was), but the disk space was not reclaimed by the kubelet.

I had to run du -h /var/lib/docker/overlay | sort -h to find which containers were doing this and delete them manually. This brought the nodes out of disk pressure and they recovered (one of them needed a reboot -f).
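For anyone doing the same hunt, a sketch of mapping a large layer directory back to a container, assuming the Docker overlay/overlay2 storage driver (the GraphDriver field names vary between driver versions, and <big-layer-id> is a placeholder):

du -h --max-depth=1 /var/lib/docker/overlay | sort -h | tail -20   # largest layer directories
for c in $(docker ps -aq); do
  docker inspect --format '{{.Name}} {{.GraphDriver.Data.UpperDir}}' "$c"
done | grep "<big-layer-id>"                                       # which container owns that directory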

@dat-timfa

This is happening for me too. I have 8 nodes in an EKS cluster, and for some reason only one node is having this GC issue. This has happened twice, and the steps below are what I've done to fix it (rough commands are sketched after the list). Does anyone know of a better / supported method for doing this? https://kubernetes.io/docs/tasks/administer-cluster/cluster-management/#maintenance-on-a-node

  1. Increase autoscaling group for EKS by +1 (replacement node for the bad one)
  2. Cordon off the bad node (kubectl cordon)
  3. Drain the bad node (kubectl drain), to kick the pods off this node and onto one of the other nodes
  4. Add downscaling protection all nodes except the bad node
  5. Decrease the autoscaling group for EKS by -1 (this deletes the bad node, since it's the only one not protected)
  6. Remove downscaling protection from all nodes
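Roughly, in commands (the ASG name, node name, and instance IDs are placeholders; the instance scale-in protection corresponds to steps 4 and 6):

aws autoscaling set-desired-capacity --auto-scaling-group-name <eks-asg> --desired-capacity <n+1>
kubectl cordon <bad-node>
kubectl drain <bad-node> --ignore-daemonsets --delete-local-data
aws autoscaling set-instance-protection --auto-scaling-group-name <eks-asg> \
  --instance-ids <good-instance-ids> --protected-from-scale-in
aws autoscaling set-desired-capacity --auto-scaling-group-name <eks-asg> --desired-capacity <n>
aws autoscaling set-instance-protection --auto-scaling-group-name <eks-asg> \
  --instance-ids <good-instance-ids> --no-protected-from-scale-in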

@KIVagant

Faced the same problem.

kubectl drain --delete-local-data --ignore-daemonsets $NODE_IP && kubectl uncordon $NODE_IP was enough to clean the disk storage.

@HayTran94

FWIW "Boot disk size in GB (per node)" was set to the minimum, 10 Gb.

Thank you very much. It worked for me.

@PhilipSchmid

I just had the same issue on a customer's RKE2 (v1.23.6+rke2r2) K8s 1.23.6 cluster. The errors in the kubelet log were:

I0607 16:27:03.708243    7302 image_gc_manager.go:310] "Disk usage on image filesystem is over the high threshold, trying to free bytes down to the low threshold" usage=89 highThreshold=85 amountToFree=4305076224 lowThreshold=80
E0607 16:27:03.710093    7302 kubelet.go:1347] "Image garbage collection failed multiple times in a row" err="failed to garbage collect required amount of images. Wanted to free 4305076224 bytes, but freed 0 bytes"

The actual problem was caused by an "underlying" full /var partition. Originally, they had Longhorn writing data to /var/lib/longhorn on a dedicated /var partition, which at some point nearly ran full. To resolve this, they simply added another disk (and therefore partition) explicitly for /var/lib/longhorn, without first deleting the old data in /var/lib/longhorn on the original /var partition. So a df -i or df -h showed quite some space and inodes available, but only because /var/lib/longhorn had meanwhile become a separate partition/mount.

To actually resolve this issue without downtime or any impact on data read/write operations in the /var/lib/longhorn directory, we chose to bind mount the original /var partition and then clean up the original /var/lib/longhorn/* data.

mkdir /mnt/temp-root
mount --bind /var /mnt/temp-root        # expose the original /var partition, including paths hidden by the newer /var/lib/longhorn mount
ls -la /mnt/temp-root/lib/longhorn      # sanity check: this is the stale data on the old partition
rm -rf /mnt/temp-root/lib/longhorn/*    # delete the stale Longhorn data to free space on /var
umount /mnt/temp-root
rmdir /mnt/temp-root

Source and explanation of a "bind mount": https://unix.stackexchange.com/questions/198590/what-is-a-bind-mount

Thanks, @andrecp, for the hint regarding "dropped mounts" (#71869 (comment))!!

Regards,
Philip

@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle stale
  • Mark this issue or PR as rotten with /lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Sep 5, 2022
@mcorralb

/remove-lifecycle stale

@k8s-ci-robot k8s-ci-robot removed the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Sep 16, 2022
@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all PRs.

This bot triages PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the PR is closed

You can:

  • Mark this PR as fresh with /remove-lifecycle stale
  • Close this PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Feb 8, 2023
@XDRAGON2002
Member

@SergeyKanzhelev is the issue still up for grabs? I'm a new contributor and though I might need help in getting this implemented, I would still like to give this a shot if available.

@alam0rt

alam0rt commented Apr 4, 2023

@SergeyKanzhelev is the issue still up for grabs? I'm a new contributor and though I might need help in getting this implemented, I would still like to give this a shot if available.

Feel free to take it! I took this ages ago, but wasn't knowledgeable enough about how this worked / about Go in general.

@alam0rt alam0rt removed their assignment Apr 4, 2023
@pavanjoshi914

/assign

@pavanjoshi914 pavanjoshi914 removed their assignment Apr 7, 2023
@boopzz

boopzz commented May 11, 2023

I'm having some garbage collection and disk space issues too (I run microk8s). I've changed the settings for image-gc-high-threshold and image-gc-low-threshold, and I'm now looking at changing maximum-dead-containers, as the default is supposedly -1, which means infinite. This sounds like garbage collection isn't really set up by default, which seems crazy! I'm happy to be told otherwise. Would this ticket (and there are a lot of posts on the microk8s repo too) be resolved by simply having sane defaults for garbage collection?

It sounds like even a not-ideal default garbage collection would be better than none! I've also had this problem with Docker in the past, where it kept endless ephemeral disks after updating a container.
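For reference, these are the kubelet knobs being discussed; a sketch with example values only (not recommendations), noting that on newer clusters the KubeletConfiguration fields imageGCHighThresholdPercent / imageGCLowThresholdPercent are the preferred way to set the same thing:

--image-gc-high-threshold=75    # start image GC once the image filesystem passes 75% usage
--image-gc-low-threshold=60     # keep deleting unused images until usage is back under 60%
--eviction-hard=nodefs.available<10%,imagefs.available<15%   # hard eviction, separate from image GC
--maximum-dead-containers=10    # deprecated flag; bounds how many exited containers are retained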

@haitch

haitch commented May 12, 2023

/assign

@ngeorger

ngeorger commented Jul 7, 2023

I had the same problem and it was caused by the bad idea of mounting a GCS bucket with FUSE as storage for k3s. Unmounting the bucket and restarting the node solved the problem for me.

I used gcsfuse to mount the bucket. It's a really useful tool, but not for high read/write demands; maybe ideal for backups, isolated or managed, but not as block storage.

Regards from Chile.

@Cool-Coder174

/assign

@vaibhav2107
Member

/remove-lifecycle stale

@k8s-ci-robot k8s-ci-robot removed the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Oct 3, 2023
@ghost

ghost commented Feb 20, 2024

/assign

@k8s-ci-robot k8s-ci-robot assigned ghost Feb 20, 2024
@ghost

ghost commented Feb 20, 2024

/unassign

@ghost ghost removed their assignment Feb 20, 2024