
Discard all data on devicemapper devices when deleting them #3256

Merged
merged 1 commit into moby:master from alexlarsson:blkdiscard on Jan 18, 2014

Conversation

alexlarsson
Contributor

This works around the fact that deleting a device in a thin pool
doesn't discard the free space. Unfortunately even this is not perfect,
as it seems discards are respected only for blocks that have never been
shared in the thin device code. However, this has been fixed in the
upstream kernel device-mapper tree:

http://git.kernel.org/cgit/linux/kernel/git/device-mapper/linux-dm.git/commit/?h=for-next&id=0ab1c92ff748b745c1ed7cde31bb37ad2c5f901a

When this hits the kernel I believe this will fully return space
for removed images/containers to the host FS. For now it only
helps partially (which is better than nothing).
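
For reference, a minimal sketch of the overall approach (illustrative only, not the exact Docker code: the device path, size lookup, and the discardBeforeDelete name are assumptions; ioctlBlkDiscard is the helper added in this change): open the thin device's block node and issue BLKDISCARD over its full length before removing it from the pool.

func discardBeforeDelete(devicePath string, size uint64) error {
	// Assumes import "os"; devicePath and size would come from the
	// devmapper device's metadata in the real driver.
	file, err := os.OpenFile(devicePath, os.O_RDWR, 0)
	if err != nil {
		return err
	}
	defer file.Close()

	// Discard every block of the device so the thin pool can reclaim
	// the space before the device itself is deleted.
	return ioctlBlkDiscard(file.Fd(), 0, size)
}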

@alexlarsson
Contributor Author

This should help with #3182

@crosbymichael
Contributor

ping @creack

@crosbymichael
Contributor

@alexlarsson I'm getting a test failure with this:

++ cd ./graphdriver/devmapper
++ go test -cover -coverprofile /go/src/github.com/dotcloud/docker/bundles/0.7.2-dev/test/coverprofiles/docker-graphdriver-devmapper -ldflags '-X main.GITCOMMIT "d1cac07" -X main.VERSION "0.7.2-dev" -w -X github.com/dotcloud/docker/utils.IAMSTATIC true -linkmode external -extldflags "-lpthread -static -Wl,--unresolved-symbols=ignore-in-object-files"' -tags netgo -a
--- FAIL: TestDriverRemove (0.00 seconds)
        driver_test.go:145: Unexpected keys: map[DmTaskSetCookie:true DmTaskSetTarget:true DmTaskSetAddNode:true DmUdevWait:true]
FAIL
exit status 1
FAIL    github.com/dotcloud/docker/graphdriver/devmapper        0.014s

@crosbymichael
Contributor

LGTM

@@ -58,3 +58,14 @@ func ioctlBlkGetSize64(fd uintptr) (int64, error) {
	}
	return size, nil
}

func ioctlBlkDiscard(fd uintptr, offset, length uint64) error {
	var r [2]uint64

Small detail: you could have used this syntax instead.

var r = [2]uint64{
	0: offset,
	1: length,
}
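
For context, the remainder of the function presumably hands that {offset, length} pair to the BLKDISCARD ioctl. A hedged sketch of how the full function might read with the suggested literal (assuming imports of syscall and unsafe, and a BlkDiscard ioctl constant defined elsewhere in the package):

func ioctlBlkDiscard(fd uintptr, offset, length uint64) error {
	// {start offset, length} of the byte range to discard, in bytes
	r := [2]uint64{offset, length}
	if _, _, errno := syscall.Syscall(syscall.SYS_IOCTL, fd, BlkDiscard, uintptr(unsafe.Pointer(&r[0]))); errno != 0 {
		return errno
	}
	return nil
}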

@creack
Contributor

creack commented Jan 3, 2014

Tentatively Scheduling for 0.7.6

@unclejack
Contributor

@alexlarsson Can you sign the commits, please?

This works around the fact that deleting a device in a thin pool
doesn't discard the free space. Unfortunately even this is not perfect,
as it seems discards are respected only for blocks that have never been
shared in the thin device code. However, this has been fixed in the
upstream kernel device-mapper tree:

http://git.kernel.org/cgit/linux/kernel/git/device-mapper/linux-dm.git/commit/?h=for-next&id=0ab1c92ff748b745c1ed7cde31bb37ad2c5f901a

When this hits the kernel I believe this will fully return space
for removed images/containers to the host FS. For now it only
helps partially (which is better than nothing).

Docker-DCO-1.1-Signed-off-by: Alexander Larsson <alexl@redhat.com> (github: alexlarsson)
@alexlarsson
Contributor Author

Squashed the commits and signed them

@unclejack
Contributor

LGTM

crosbymichael added a commit that referenced this pull request Jan 18, 2014
Discard all data on devicemapper devices when deleting them
@crosbymichael merged commit a60f0a0 into moby:master Jan 18, 2014
@alexlarsson deleted the blkdiscard branch March 28, 2014 09:09
@agrebin

agrebin commented Apr 18, 2016

How can I determine whether my Docker version includes this fix?
