
How to deal with the case that Docker data file size reaches the 100G threshold? #21611

Closed
surlymo opened this issue Mar 29, 2016 · 3 comments


surlymo commented Mar 29, 2016

Output of docker version:

Client:
 Version:      1.8.2-el7.centos
 API version:  1.20
 Package Version: docker-1.8.2-10.el7.centos.x86_64
 Go version:   go1.4.2
 Git commit:   a01dc02/1.8.2
 Built:        
 OS/Arch:      linux/amd64

Server:
 Version:      1.8.2-el7.centos
 API version:  1.20
 Package Version: 
 Go version:   go1.4.2
 Git commit:   a01dc02/1.8.2
 Built:        
 OS/Arch:      linux/amd64

Output of docker info:

Containers: 0
Images: 130
Storage Driver: devicemapper
 Pool Name: docker-253:0-3221586422-pool
 Pool Blocksize: 65.54 kB
 Backing Filesystem: xfs
 Data file: /dev/loop4
 Metadata file: /dev/loop5
 Data Space Used: 107.4 GB
 Data Space Total: 107.4 GB
 Data Space Available: 0 B
 Metadata Space Used: 60.92 MB
 Metadata Space Total: 2.147 GB
 Metadata Space Available: 2.087 GB
 Udev Sync Supported: true
 Deferred Removal Enabled: false
 Data loop file: /var/lib/docker/devicemapper/devicemapper/data
 Metadata loop file: /var/lib/docker/devicemapper/devicemapper/metadata
 Library Version: 1.02.107-RHEL7 (2015-12-01)
Execution Driver: native-0.2
Logging Driver: json-file
Kernel Version: 3.10.0-229.el7.x86_64
Operating System: CentOS Linux 7 (Core)
CPUs: 12
Total Memory: 31.2 GiB
Name: IP-5-14
ID: DNOS:FC2P:2WH4:OSYL:L2CH:U7HZ:MFL2:ZID3:SYTX:JWKP:TGIN:YYPB
WARNING: bridge-nf-call-iptables is disabled
WARNING: bridge-nf-call-ip6tables is disabled

Additional environment details (AWS, VirtualBox, physical, etc.):

Steps to reproduce the issue:
I want to run my Docker image called gateway using the command below (an ENTRYPOINT that runs in the foreground has been set in the gateway's Dockerfile):

docker run -d -P gateway

Describe the results you received:

[root@IP-5-14 devicemapper]# docker run -d -P gateway
Error response from daemon: Error running DeviceCreate (createSnapDevice) dm_task_run failed

Describe the results you expected:
The container should run as usual.

Additional information you deem important (e.g. issue happens only occasionally):

  1. As the docker info output shows, the data space used has reached the 100 GB total, so the pool is full.
  2. When I deleted the containers with docker rm -f $(docker ps -a -q), all the containers were removed, but the error below was reported:
Error response from daemon: Cannot destroy container 5d5eed10468b: Driver devicemapper failed to remove root filesystem 5d5eed10468b809475b0eb23bda167e1a962d32092a348936d56e27417dbf578: Error running DeleteDevice dm_task_run failed
  3. I used docker ps -a to confirm that all containers had been removed after step 2; the output was empty. However, the error below occurred when I tried to delete a <none> image:
[root@IP-5-14 devicemapper]# docker rmi 6fdebd7b0eb5
Error response from daemon: Conflict, cannot delete because 6fdebd7b0eb5 is held by an ongoing pull or build
Error: failed to remove images: [6fdebd7b0eb5]

Can someone help me out? Thanks a lot.


ghost commented Mar 29, 2016

You need to increase the size of the storage pool available to your containers. To do this, you will need to remove /var/lib/docker, which will destroy all your containers and images.

sudo service docker stop
sudo rm -rf /var/lib/docker
sudo mkdir -p /var/lib/docker/devicemapper/devicemapper
sudo dd if=/dev/zero of=/var/lib/docker/devicemapper/devicemapper/data bs=1G count=0 seek=300   # create a 300 GB sparse data file
sudo service docker start
This will make your total data space 300 GB.

Does this solve your issue? Also, try stopping the Docker processes and starting Docker again, then see if you can remove the images.
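
For what it's worth, here is a hedged sketch of an equivalent approach (not from the reply above): instead of creating the sparse file by hand, you can let the daemon create a larger loop data file itself with the dm.loopdatasize devicemapper storage option. The /etc/sysconfig/docker path below is specific to the CentOS/RHEL packaging and is an assumption about this setup:

# /etc/sysconfig/docker -- ask the daemon to create a 300 GB loop data file on next start
OPTIONS="--storage-opt dm.loopdatasize=300GB"

sudo service docker stop
sudo rm -rf /var/lib/docker        # still destroys all containers and images
sudo service docker start
docker info | grep "Data Space"    # verify the new pool size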

@thaJeztah
Member

Devicemapper running out of space can be very tricky indeed; we're tracking issues around this in #20272.

For Docker 1.11, there's a new option to specify a minimum amount of free space to keep, which would prevent you from arriving in this situation; see #20786
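
For reference, that is the dm.min_free_space devicemapper storage option (10% is the documented default). A minimal sketch of how it would be passed to the 1.11 daemon, assuming the daemon is started by hand rather than through an init script:

docker daemon --storage-opt dm.min_free_space=10%
# new device/snapshot creation now fails early with an explicit error
# once less than 10% of the thin pool data or metadata space is free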

I'm going to close this issue because I think this is not a bug but a support question, but feel free to continue the discussion here.


surlymo commented Mar 31, 2016

Okay, thanks a lot. Looking forward to Docker 1.11's new features.
