dog vdi delete leaves orphan objects (in a healthy cluster) #436
Please try with the latest master.
Hi, we tried stable v1.0.1 (sheepdog-1.0.1-1_amd64.deb) without a cluster (one node only); the problem remains the same:
# dog cluster format -t -c 1
using backend plain store
# ls -al /var/lib/sheepdog/obj/
total 12
drwxr-x--- 3 root root 4096 Dec 10 09:12 .
drwxr-x--- 4 root root 4096 Dec 10 09:12 ..
drwxr-x--- 2 root root 4096 Dec 10 09:12 .stale
# dog vdi create -P dog001 16M
100.0 % [===] 16 MB / 16 MB
# ls -al /var/lib/sheepdog/obj/
total 16408
drwxr-x--- 3 root root 4096 Dec 10 09:12 .
drwxr-x--- 4 root root 4096 Dec 10 09:12 ..
-rw-r----- 1 root root 4194304 Dec 10 09:12 00f81c0000000000
-rw-r----- 1 root root 4194304 Dec 10 09:12 00f81c0000000001
-rw-r----- 1 root root 4194304 Dec 10 09:12 00f81c0000000002
-rw-r----- 1 root root 4194304 Dec 10 09:12 00f81c0000000003
-rw-r----- 1 root root 12587576 Dec 10 09:12 80f81c0000000000
drwxr-x--- 2 root root 4096 Dec 10 09:12 .stale
# dog vdi delete dog001
# ls -al /var/lib/sheepdog/obj/
total 16
drwxr-x--- 3 root root 4096 Dec 10 09:12 .
drwxr-x--- 4 root root 4096 Dec 10 09:12 ..
-rw-r----- 1 root root 12587576 Dec 10 09:12 80f81c0000000000 # <-------- !!!!
drwxr-x--- 2 root root 4096 Dec 10 09:12 .stale
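A check worth adding to the transcript above (it is not in the original report) is to confirm that the cluster itself no longer lists the vdi even though the inode object file is still on disk:
# dog vdi list
(dog001 should no longer appear, while 80f81c0000000000 is still present in /var/lib/sheepdog/obj/)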
Compiled from master (git clone, 3ebe5ea); same problem as with v1.0.1.
@ggrandes 80**** is the inode object. It is not deleted, by design.
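For reference, the object names above follow sheepdog's object-ID layout (the exact constants, a 32-bit shift for the vid and a top VDI bit, are an assumption based on the protocol headers, not something stated in this thread): a data object is named (vid << 32) | index, and the inode object is the same value with the top bit set, which is why it starts with 80.
# data objects 00f81c0000000000..00f81c0000000003 -> vid f81c00, indexes 0..3
# inode object 80f81c0000000000                   -> same vid, top (VDI) bit set
# recover the vid from a data object name:
printf '%x\n' $(( 0x00f81c0000000003 >> 32 ))     # prints f81c00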
@vatelzh Why? If no other vdi references objects from this vdi, why is it not deleted?
@vtolstov This is useful in some scenarios. For example, when creating a vdi snapshot, the new vid is selected right next to the origin vid in a vdi bitmap that records all allocated vids, and the inode objects on disk are the way to know which vids were allocated when the cluster was shut down.
@vatelzh Thanks for the response. I don't doubt that it is useful in some scenarios, but in others, when a VDI is no longer used, these orphan files are in the long term like a "memory leak". Our future scenario is many ephemeral machines (500+/day); at 12 MB per VDI, 365 days means about 2.19 TB of space lost per year (in AWS terms that is roughly 400 USD/month of EBS) with v1.0.1 (note also that these orphan files are about three times larger in v1.0.1 than their counterparts in v0.8.3). That is a lot of space. Thinking out loud: could we have a garbage collector of some kind?
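There is no built-in garbage collector for this, but a rough, purely illustrative sketch of one is below. It assumes a plain store under /var/lib/sheepdog/obj and that the hex vid is the 8th field of "dog vdi list -r" output; both assumptions (especially the field position) should be verified against your sheepdog version, and nothing should be removed outside a disposable test cluster.
#!/bin/bash
# Illustrative orphan-inode scan (not a sheepdog feature).
OBJ_DIR=/var/lib/sheepdog/obj
live_vids=$(dog vdi list -r | awk '{print $8}')    # assumption: vid is field 8

for obj in "$OBJ_DIR"/80*; do
    [ -e "$obj" ] || continue                      # no 80* objects at all
    vid=$(basename "$obj" | cut -c3-8)             # hex chars 3-8 hold the 24-bit vid
    if ! grep -qx "$vid" <<<"$live_vids"; then
        echo "orphan inode object: $obj (vid $vid)"
        # rm -f "$obj"    # only after verifying on a disposable test cluster
    fi
done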
As far as I remember, sheepdog is unmaintained as far as such things go, but I'm trying to build a sheepdog-compatible storage system (a Ceph CRUSH map for object location, but the sheepdog protocol for QEMU).
|
Look here. This is a typical problem... Sheepdog is not as bad as some people think. This is a great project.
Maybe... but I have a simple rule: don't play or joke with storage. If you touch it, don't be surprised if you break things.
Then you have one choice: use hardware solutions. Open source does not fit your rule.
Hi @AnatolyZimin, |
Summary:
dog vdi delete leaves orphan files in the obj directory.

Environment:

How reproduce: