Docker does not free up disk space after container, volume and image removal #21925
Did you previously run using a different storage driver? If you did, it's possible that data from the previous storage driver is still taking up space under /var/lib/docker.
Nope, it has always been AUFS.
Yep, I realized after posting here that the issue is only related to devicemapper, sorry ^^
Might be worth checking which directories are actually growing in size.
Yep, I already confirmed that :( To be more accurate, the folders growing in size are under /var/lib/docker.
Thanks. I'm already doing that. On each deployment, I:
Which is the reason why I don't understand why my disk space is decreasing over time :(
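For illustration, a per-deployment cleanup along these lines might look like the sketch below; the container name and the exact steps are assumptions, based on the commands mentioned later in this issue, not the reporter's actual script:

```bash
# Hypothetical per-deployment cleanup -- adapt names and steps to your setup.
docker stop my_app && docker rm -v my_app              # remove the old container and its anonymous volumes
docker rmi $(docker images -q --filter dangling=true)  # remove untagged image layers
docker volume ls -f dangling=true                      # list orphaned volumes (docker 1.9+)
```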
Do the daemon logs show anything interesting (e.g. Docker failing to remove containers)? You've X-ed out the number of containers and images in your output; is that number going down after your cleanup scripts have run? Also note that you're running an outdated version of Docker; if you want to stay on Docker 1.8.x, you should at least update to Docker 1.8.3 (which contains a security fix).
No, everything seems to be normal. Plus, I keep losing disk space while containers are up and running, without even deploying new containers.
Ah yeah, sorry for X-ing those numbers. They don't change at all, as I always deploy the same containers and clean the old ones each time I deploy. So, the number of containers and number of images remain the same as expected.
Yep, I'd better update, indeed. I was planning on updating to the latest version soon, but I will have to do it in the next 48 hours because my server is now running out of disk space :(
Hi guys, update to Docker 1.10 done. I used another instance to deploy my infra on top of Docker v1.10, so I took that chance to investigate a little deeper into this disk space issue on the old server; the problem came from something within my infra, unrelated to Docker containers... Sorry for bothering :(
@stouf good to hear you resolved your issue
Thanks a lot for the support :)
This issue and #3182 are marked as closed. However, just today another user reported the problem remains. Please investigate.
@awolfe-silversky Could you please describe the issue? As I said above, my problem wasn't related to containers or Device Mapper. It was a container in my infrastructure silently generating tons of logs that were never removed.
@groyee I gave it a try on my side and had the same results; I only got 500MB freed by restarting the Docker daemon, but I have fewer than 10 containers running on the server I was testing.
I have a similar problem where clearing out my volumes, images, and containers did not free up the disk space. I traced the culprit to this file, which is 96 GB: /Users/MyUserAccount/Library/Containers/com.docker.docker/Data/com.docker.driver.amd64-linux/Docker.qcow2 However, it looks like this is a known issue for Macs:
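For anyone hitting the Docker for Mac variant, a minimal sketch of checking and resetting the ever-growing disk image (the path is the default one from the comment above); note that deleting the qcow2 wipes all local images, containers and volumes:

```bash
# Docker for Mac (qcow2-era) workaround sketch -- the file only grows, so reclaiming
# the space means resetting it, which deletes ALL local Docker data.
du -sh ~/Library/Containers/com.docker.docker/Data/com.docker.driver.amd64-linux/Docker.qcow2
osascript -e 'quit app "Docker"'   # stop Docker for Mac first
rm ~/Library/Containers/com.docker.docker/Data/com.docker.driver.amd64-linux/Docker.qcow2
open -a Docker                     # on restart a fresh, small disk image is created
```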
I'm suffering a similar problem on Debian jessie. I freed ~400MB with a service restart, but have 2.1GB of old container garbage inside /var/lib/docker/aufs with just one container running.
Confirming this issue.

function usagesort {
    # list first-level directory sizes; note that `sort -g` orders by the numeric
    # prefix only (so 43G sorts before 276K); `sort -h` would sort by actual size
    local dir_to_list="$1"
    cd "$dir_to_list" || return
    du -h -d 1 | sort -k 1,1 -g
}
...
$ usagesort "$HOME/Library/Containers" | grep -i docker
43G ./com.docker.docker
276K ./com.docker.helper

Is there an official workaround to this issue, or better yet, when are you planning to actually fix it?
@mbana @gsccheng on OS X, that's unrelated to the issue reported here, and specific to Docker for Mac; see docker/for-mac#371
What is the solution here?
Is there already a solution for this issue?
/var/lib/docker/aufs takes a damn lot of space on my disk. There are no images and containers left anymore:
I can't get rid of it without manually deleting it, which I'm afraid of doing because I don't know which of that data is still needed.
@HWiese1980 docker (up until docker 17.06) removed containers even when deleting their layer data failed, which could leave orphaned layers on disk; docker 17.06 and up will (in the same situation) keep the container registered (in "dead" state), which allows you to remove the container (and layers) at a later stage. However, if you've been running older versions of docker and have a cleanup script that force-removes containers, such leftovers may still be present. In your situation, it looks like there's no (or very little) data in those leftover directories, so removing them manually should be safe.
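A minimal sketch of finding and clearing such "dead" containers (assuming Docker 17.06+ and that whatever blocked the original removal is gone):

```bash
# List containers left in the "dead" state, then retry removing them so their
# layers can be cleaned up as well.
docker ps -a --filter status=dead --format '{{.ID}}\t{{.Names}}\t{{.Status}}'
docker rm $(docker ps -aq --filter status=dead)
```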
I can't add anything too intelligent to this, but after a good amount of build testing my local storage became full, so I tried to delete all images and containers; they were gone from Docker, however the space wasn't reclaimed.
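When that happens, a quick way to see whether Docker itself still accounts for the space (a sketch; the default data-root is assumed):

```bash
# Compare what the engine reports with what is actually on disk, then prune
# everything unreferenced. Note: `-a --volumes` removes ALL unused images and
# volumes, not just dangling ones.
docker system df -v
sudo du -sh /var/lib/docker/*
docker system prune -a --volumes
```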
I think I got hit by the same thing. I installed docker earlier today on this new laptop, so it was clean before, and built a few images to test. Getting low on space, I took care of calling docker rm on any stopped container (produced by my builds), and then docker rmi on all untagged images; currently I have this:
I already restarted docker, which didn't change anything. I think I'll remove everything ending with -removing in the diff/ directory; thankfully nothing important depends on the docker images on this laptop, but still, I wouldn't like for this to happen on a server.
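A sketch of that manual cleanup (aufs driver and default data-root assumed; only with the daemon stopped, and only if nothing depends on those layers):

```bash
sudo systemctl stop docker
# Remove only the diff directories Docker already marked for deletion
sudo find /var/lib/docker/aufs/diff -maxdepth 1 -name '*-removing' -exec rm -rf {} +
sudo systemctl start docker
```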
I'm wondering if a cause could be large static files I'm copying into my
containers (just a guess)
…On Fri, Aug 18, 2017 at 11:17 PM Gabriel Pettier ***@***.***> wrote:
I think i got hit by the same thing, installed docker earlier today on
this new laptop, so it was clean before, and built a few images to test,
getting low on space, i took care on calling docker rm on any stopped
docker (produced by my builds), and then docker rmi on all untagged images,
currently i have this
***@***.***:~> sudo docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
buildozer latest fbcd2ca47e0b 3 hours ago 4.19GB
ubuntu 17.04 bde41be8de8c 4 weeks ago 93.1MB
19:22:44 18/08/17 red argv[1] 100% 59
***@***.***:~> sudo df -h /var/lib/docker/aufs
Sys. de fichiers Taille Utilisé Dispo Uti% Monté sur
/dev/nvme0n1p5 114G 111G 0 100% /var/lib/docker/aufs
19:23:08 18/08/17 red argv[1] 100% 25
***@***.***:~> sudo du -sh /var/lib/docker/aufs/diff
59G /var/lib/docker/aufs/diff
19:23:25 18/08/17 red argv[1] 100% 6115
***@***.***:~> sudo docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
buildozer latest fbcd2ca47e0b 3 hours ago 4.19GB
ubuntu 17.04 bde41be8de8c 4 weeks ago 93.1MB
19:23:30 18/08/17 red argv[1] 100% 46
***@***.***:~> sudo docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
19:23:33 18/08/17 red argv[1] 100% 43
***@***.***:~> sudo ls /var/lib/docker/aufs/diff|head
04fd10f50fe1d74a489268c9b2df95c579eb34c214f9a5d26c7077fbc3be0df4-init-removing
04fd10f50fe1d74a489268c9b2df95c579eb34c214f9a5d26c7077fbc3be0df4-removing
050edba704914b8317f0c09b9640c9e2995ffa403640a37ee77f5bf219069db3
059f9eee859b485926c3d60c3c0f690f45b295f0d499f188b7ad417ba8961083-init-removing
059f9eee859b485926c3d60c3c0f690f45b295f0d499f188b7ad417ba8961083-removing
09425940dd9d3e7201fb79f970d617c45435b41efdf331a5ad064be136d669b2-removing
0984c271bf1df9d3b16264590ab79bee1914b069b8959a9ade2fb93d8c3d1d9b-init-removing
0984c271bf1df9d3b16264590ab79bee1914b069b8959a9ade2fb93d8c3d1d9b-removing
0b082b302e8434d4743eb6e0ba04076c91fbd7295cc524653b2d313186d500fa-removing
0b11febcb2332657bd6bb3feedd404206c780e65bc40d580f9f4a77eb932d199-init-removing
19:23:57 18/08/17 red argv[1] 100% 35
***@***.***:~> sudo ls /var/lib/docker/aufs/diff|wc -l
256
already restarted docker, didn't change anything, i think i'll remove
everything ending with -removing in the diff/ directory, thankfully
nothing important depends on the docker images in this laptop, but still,
wouldn't like for this to happen on a server.
Have you tried
I have a similar issue: https://stackoverflow.com/q/45798076/562769
It happened again for us.
I've resolved this issue (when running WSL2) by manually recompacting the WSL2 ext4.vhdx:
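The usual recipe for that, as a sketch; the vhdx path below is Docker Desktop's default WSL2 data disk and is an assumption, so check where yours actually lives first:

```powershell
# Run from an elevated PowerShell prompt on the Windows host.
wsl --shutdown
Optimize-VHD -Path "$env:LOCALAPPDATA\Docker\wsl\data\ext4.vhdx" -Mode Full
# Without the Hyper-V module, diskpart ("select vdisk file=..." then "compact vdisk")
# achieves the same.
```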
I'm also having this issue on macOS Catalina 10.15.7 using a CentOS image.
Similar issue with containers which were deleted, but whose overlay2 directories still exist on the node.
This deleted all unused docker volumes and stopped containers, including their overlay2 data.
We are using Kubernetes, which manages the pods; in this case these are expired Kubernetes jobs. Kubernetes in general deletes the pods, and hence the containers, so I do not think the problem is in Kubernetes; rather, it should be related to the Docker container lifecycle.
@fierman333 does restarting the daemon in such a situation clean up those files? My best guess would be that if files were in use (or if there were mounts shared between namespaces), the daemon wasn't able to remove them. Possibly they're garbage-collected when the daemon is restarted (not 100% sure though). If mounts leaked to other namespaces (I know of situations where cAdvisor was a culprit there), things sometimes get nasty and (IIRC) only a restart of the host can release such mounts.
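A quick way to check for such leaked mounts (a sketch; the layer ID below is a hypothetical placeholder for one of the leftover overlay2 directory names):

```bash
# Which processes still hold a mount referencing the leftover layer directory?
LAYER_ID=0123456789abcdef   # hypothetical placeholder: one of the leftover dirs under overlay2/
sudo grep -l "overlay2/${LAYER_ID}" /proc/*/mountinfo
# Any PID listed still pins the mount in its namespace; restarting that process
# (or, worst case, the host) releases it so the directory can be removed.
```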
@thaJeztah Tried it; it didn't help. Those directories are still in place for the deleted containers.
-> NULL
Those pods are Kubernetes jobs, which are simple pods without any volume configuration; they just do some work and stop. We keep the last 1 job execution, so on the next job execution the previous one is deleted by Kubernetes. We notice this also on other nodes, not only this one:
I find the same problem in our systems using Debian 11 with Docker version 20.10.10, build b485636. I had to stop docker, remove /var/lib/docker/* and start it again to clean more than 800GB of garbage that none of the "docker * prune" commands managed to remove. I also tried @thaJeztah's suggestion of restarting without deleting in between, and it didn't work either. /var/lib/docker/overlay2 was taking most of the space. This is a CI server which creates thousands of containers and tens of images every day, in parallel.
Found a slightly less radical way of cleaning up space. It seems it does not require reinstalling docker-ce (maybe not even a restart), but it still requires removing all images and containers as well as manually removing directories and files:
Not 100% sure whether the "l" directory needs to exist, or whether the overlay2 storage backend will recreate it if it does not. If the sha256 hashes are not removed, "docker pull" will fail with an error.
It looks like even if all images are removed, the hashes still exist. Maybe prune is failing to remove some image layers. I also tried "docker image prune --force" and those hashes were not removed. Now everything is cleaned up, but I will do some more testing in a few hours/days when the problem appears again.
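For reference, a sketch of the kind of manual cleanup being described; the exact directories are assumptions based on the overlay2 on-disk layout, not the precise steps from this comment, and it is destructive (remove all containers and images first):

```bash
# DESTRUCTIVE sketch, assuming the overlay2 driver and default data-root -- wipes
# all layer data plus the image metadata (layerdb/imagedb/distribution) that references it.
sudo systemctl stop docker
sudo rm -rf /var/lib/docker/overlay2/*
sudo rm -rf /var/lib/docker/image/overlay2/*
sudo systemctl start docker   # the daemon recreates the empty structure on start
```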
I faced a very similar issue to @albertca's, but on Ubuntu 22.04 and a smaller-scale project. I also have another issue that might be related.
I am having the same issue with Ubuntu 20.04.
I suspect it is caused by some undocumented mechanism.
I noticed strange behavior ("data-root": "/mnt/docker-overlay"):
You can notice that a lot of space is in use. After this command:
We can see that only 929MB was reclaimed, but!
Here is the disk usage afterwards:
Also, I noticed that:
But:
Seeing the same. I've no running containers, and after running every variant of the prune commands, the space is still not reclaimed.
The only workaround I've found is to stop the docker daemon, forcibly remove /var/lib/docker, and start the daemon again.
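That nuclear workaround, as a sketch; it removes every image, container, volume and network on the host:

```bash
sudo systemctl stop docker docker.socket
sudo rm -rf /var/lib/docker        # wipes ALL local Docker state
sudo systemctl start docker
```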
Even after a complete reinstallation of docker, I am continuously running into space issues. The drive filled up and services crashed 3 times this week. I am developing a rather large VM container, and the build cache does not clear (removing images does not free space either).
Docker v24 / Ubuntu 20 Focal Fossa (5.15.0-89-generic #99~20.04.1-Ubuntu)
TYPE    TOTAL    ACTIVE    SIZE    RECLAIMABLE
$ du -sh /var/lib/docker
Unfortunately, sudo service docker restart does not free the space either, even though I just ran docker image rm on about 60GB of images. docker buildx prune -f seems to help in removing the build cache, but it isn't ideal that I have to remove the only 20GB I care about (and wait hours to rebuild images I'm actively developing) in order to remove the 98% of garbage I don't want.
@jackgray your issue does not look directly related to this ticket. This ticket is about cases where storage remains in use after content is pruned / deleted. Discussing all options would be out of scope for this bug report, but with BuildKit as builder, the build cache is separate from the image store itself, so removing images won't clean up the build cache (that's what docker builder prune / docker buildx prune is for). You may also be interested in the daemon configuration for garbage-collecting the build cache and the retention policies that can be configured: https://docs.docker.com/build/cache/garbage-collection/. A more in-depth discussion would probably be better suited for a discussion, either in the BuildKit repository (https://github.com/moby/buildkit/discussions) or this repository (https://github.com/moby/moby/discussions).
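For reference, those garbage-collection settings live in the daemon configuration; a minimal /etc/docker/daemon.json sketch along the lines of the linked docs (the 20GB figure is just an example, pick what fits your disk):

```json
{
  "builder": {
    "gc": {
      "enabled": true,
      "defaultKeepStorage": "20GB"
    }
  }
}
```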
@thaJeztah that was extremely thoughtful and helpful, thank you. I learned a lot about docker mechanics through this :)
I'm still also seeing this:
Is it possible that regularly running out of disk space might cause this? I'm working with Dockerfiles that produce a huge amount of data, and my machine runs out of disk space a lot, so I have to regularly prune the docker cache; maybe docker is losing track of image data when it runs out of disk space during an operation?
Similar issue here; I have run the prune commands.
The image sizes add up to 7.8GB.
The overlay2 directory, however, uses more than that.
I am using Ubuntu 22.04 LTS, if that changes anything.
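One way to cross-check where that usage comes from (a sketch; the overlay2 driver is assumed, and single-layer images may print "<no value>" for LowerDir, which du will simply complain about):

```bash
# Sum the on-disk layer directories referenced by the images that are still present,
# then compare against the total size of /var/lib/docker/overlay2.
for img in $(docker images -q); do
  docker image inspect --format '{{ .GraphDriver.Data.UpperDir }}:{{ .GraphDriver.Data.LowerDir }}' "$img"
done | tr ':' '\n' | sort -u | xargs -r sudo du -shc 2>/dev/null | tail -n1
sudo du -sh /var/lib/docker/overlay2
```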
Does that machine run many builds?
Thanks for the quick reply, I ran a build just once on this machine, never after. I installed using |
Oh, right, I just noticed that in your case there's still content in use (as reported by the engine itself).
So, do you mean I do not have any excess disk space being used? Sorry for any misunderstanding!
Hard to tell. There for sure have been some cases where content wasn't properly cleaned up in some situations (the ticket I linked in my previous comment being one of them). That said, |
I've written and used this small bash script to try to detect layers in the overlay2 directory that survived pruning. I've also had the issue on many machines, even in some cases where I wouldn't have expected it. I'm not 100% sure it doesn't cause any trouble though, so be careful while using it. I'd like to have feedback on this as well.
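That script isn't reproduced here, but a sketch of the general approach (comparing what is on disk with what the image/container metadata references; default data-root, overlay2 driver, and running as root are all assumptions) could look like this:

```bash
#!/usr/bin/env bash
# Sketch only (not the script referenced above): list overlay2 layer directories that are
# not referenced by any image layer (cache-id) or container mount (mount-id).
set -u
root=/var/lib/docker

# Layer directories present on disk (the 'l' dir only holds shortened symlinks).
ls -1 "$root/overlay2" | grep -v '^l$' | sed 's/-init$//' | sort -u > /tmp/on_disk

# Layer directories referenced by image and container metadata.
for f in "$root"/image/overlay2/layerdb/sha256/*/cache-id \
         "$root"/image/overlay2/layerdb/mounts/*/mount-id; do
  if [ -f "$f" ]; then cat "$f"; echo; fi
done | sort -u > /tmp/referenced

# Anything on disk but never referenced is a candidate leftover -- review before deleting.
comm -23 /tmp/on_disk /tmp/referenced
```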
I'm also hitting a case on a server where pruning does not reclaim the space used under /var/lib/docker/overlay2.
@thaJeztah after 6 days with a freshly installed Docker 26.0.0 on Ubuntu 22.04 LTS, there's still a ~20G difference between what Docker reports as in use and what is actually used on disk. Edit: the machines have nightly cron jobs that run a prune.
Versions & co
Docker
Docker version
Docker info:
$ docker info
Containers: XXX
Images: XXX
Storage Driver: aufs
 Root Dir: /var/lib/docker/aufs
 Backing Filesystem: extfs
 Dirs: XXX
 Dirperm1 Supported: true
Execution Driver: native-0.2
Logging Driver: json-file
Kernel Version: 3.19.0-26-generic
Operating System: Ubuntu 14.04.3 LTS
CPUs: 1
Total Memory: XXX GiB
Name: XXX
ID: XXXX:XXXX:XXXX:XXXX
Operating system
Linux 3.19.0-26-generic #28~14.04.1-Ubuntu SMP Wed Aug 12 14:09:17 UTC 2015 x86_64 x86_64 x86_64 GNU/Linux
Issue
Here is how I currently deploy my application:
1. docker rm -v xxxxx
2. docker rmi $(docker images -q)
However, little by little, I'm running out of disk space. I made sure I don't have any orphan volumes, unused containers and images, etc...
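For completeness, the kind of checks that can confirm there really is nothing left to reclaim (a sketch; `docker volume` requires Docker 1.9+, newer than the 1.8.x engine shown above):

```bash
docker ps -a --filter status=exited       # stopped containers still taking up space
docker images --filter dangling=true      # untagged image layers
docker volume ls --filter dangling=true   # volumes not referenced by any container (1.9+)
sudo du -sh /var/lib/docker/*             # what actually uses the space on disk
```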
I found a post on a forum saying the following:
My machine is a Linux instance hosted on AWS, so I wonder if the kernel I'm using could be related to the issue I referenced above?
If not, does anyone have an idea about what could be the origin of this problem? I spent the whole day looking for a solution, but could not find any so far :(