
Object size not reducing after delete files from bucket #1198

Open
gauravrishi168 opened this issue Nov 8, 2019 · 3 comments

Comments

@gauravrishi168

Hi,
I am using LeoFS version 1.3.3. Using the s3cmd command, I removed most of the heavy files from the bucket, but `df -HT` still shows the same size for the partition. Compaction was also enabled.

active number of objects: 114459
total number of objects: 314918
active size of objects: 560646011442
total size of objects: 1428424558539
ratio of active size: 39.25%
last compaction start: 2019-11-08 21:35:33 +0550
last compaction end: 2019-11-08 21:51:11 +0550
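For reference, the ratio reported by `leofs-adm du` is simply the active size divided by the total size. A quick sketch (plain Python, not part of LeoFS, using the figures above) shows how much space compaction should be able to reclaim:

```python
# Figures taken from the `leofs-adm du` output above.
active_size = 560_646_011_442    # bytes still referenced by live objects
total_size = 1_428_424_558_539   # bytes occupied by the .avs containers

ratio = active_size / total_size * 100
reclaimable = total_size - active_size

print(f"ratio of active size: {ratio:.2f}%")                  # matches the reported 39.25%
print(f"reclaimable by compaction: {reclaimable / 10**9:.1f} GB")
```

In other words, roughly 60% of the container space (about 868 GB) is held by deleted objects and should be freed once compaction completes successfully.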

Can you please help how we can reduce the size of the object.

@gauravrishi168 gauravrishi168 changed the title Object size not reducing after delete files from buket Object size not reducing after delete files from bucket Nov 8, 2019
@yosukehara
Member

Sorry for the delayed reply.
Have any errors occurred on the LeoStorage node?
LeoStorage writes its error log to the following location:

Destination of log file(s) of LeoStorage
Default: ./log/app

@gauravrishi168
Author

I am not getting any serious errors. However, I restarted the gateway and the storage node, but with no luck. Can you please confirm whether we can delete object files such as the .avs files?

[root@pgp-leofs1 app]# tail -100 error.20191108.23.1
[W] storage_0@10.143.1.70 2019-11-08 23:02:58.612885 +0550 1573234378 leo_compact_fsm_worker:execute_1/4 1082 [{obj_container_path,"/home_leofs/avs/object/6.avs_63662924913"},{error_pos_start,118253588384},{error_pos_end,118253588480},{errors,[{invalid_format,unexpected_time_format},{976,"invalid data"}]}]
[W] storage_0@10.143.1.70 2019-11-08 23:17:51.837719 +0550 1573235271 leo_compact_fsm_worker:execute_1/4 1082 [{obj_container_path,"/home_leofs/avs/object/7.avs_63662924913"},{error_pos_start,120516298304},{error_pos_end,120516435968},{errors,[{invalid_format,unexpected_time_format},{invalid_format,over_limit},{976,"invalid data"}]}]
[root@pgp-leofs1 app]# tail -1000 error.20191108.23.1
[W] storage_0@10.143.1.70 2019-11-08 23:02:58.612885 +0550 1573234378 leo_compact_fsm_worker:execute_1/4 1082 [{obj_container_path,"/home_leofs/avs/object/6.avs_63662924913"},{error_pos_start,118253588384},{error_pos_end,118253588480},{errors,[{invalid_format,unexpected_time_format},{976,"invalid data"}]}]
[W] storage_0@10.143.1.70 2019-11-08 23:17:51.837719 +0550 1573235271 leo_compact_fsm_worker:execute_1/4 1082 [{obj_container_path,"/home_leofs/avs/object/7.avs_63662924913"},{error_pos_start,120516298304},{error_pos_end,120516435968},{errors,[{invalid_format,unexpected_time_format},{invalid_format,over_limit},{976,"invalid data"}]}]
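Those warnings come from the compaction worker hitting unreadable regions inside the .avs containers. A small sketch (plain Python; the regex is my own, not part of LeoFS) pulls the affected container path and corrupted byte range out of such a log line:

```python
import re

# One of the leo_compact_fsm_worker warnings from the error log above.
line = ('[{obj_container_path,"/home_leofs/avs/object/6.avs_63662924913"},'
        '{error_pos_start,118253588384},{error_pos_end,118253588480},'
        '{errors,[{invalid_format,unexpected_time_format},{976,"invalid data"}]}]')

path = re.search(r'obj_container_path,"([^"]+)"', line).group(1)
start = int(re.search(r'error_pos_start,(\d+)', line).group(1))
end = int(re.search(r'error_pos_end,(\d+)', line).group(1))

print(path)                  # which container hit the error
print(end - start, "bytes")  # size of the corrupted span (96 bytes here)
```

Note the corrupted spans are tiny relative to the ~120 GB containers; the warnings explain why compaction stops short rather than indicating large-scale data loss.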

Output of MQ


[root@pgp-leofs1 app]# leofs-adm mq-stats storage_0@10.143.1.70
id | state | number of msgs | batch of msgs | interval | description
--------------------------------+-------------+----------------|----------------|----------------|---------------------------------------------
leo_delete_dir_queue | idling | 0 | 1600 | 500 | remove directories
leo_comp_meta_with_dc_queue | idling | 0 | 1600 | 500 | compare metadata w/remote-node
leo_sync_obj_with_dc_queue | idling | 0 | 1600 | 500 | sync objs w/remote-node
leo_recovery_node_queue | idling | 0 | 1600 | 500 | recovery objs of node
leo_async_deletion_queue | idling | 0 | 1600 | 500 | async deletion of objs
leo_rebalance_queue | idling | 0 | 1600 | 500 | rebalance objs
leo_sync_by_vnode_id_queue | idling | 0 | 1600 | 500 | sync objs by vnode-id
leo_per_object_queue | idling | 0 | 1600 | 500 | recover inconsistent objs


*********************** Disk Space ************************
/dev/mapper/VolGroup-lv_home
ext4 2.1T 603G 1.4T 31% /home


**************************** du detail output ****************
[root@pgp-leofs1 app]# leofs-adm du detail storage_0@10.143.1.70
[du(storage stats)]
file path: /home_leofs/avs/object/0.avs
active number of objects: 14477
total number of objects: 14616
active size of objects: 70897287668
total size of objects: 71597082156
ratio of active size: 99.02%
last compaction start: 2019-11-11 11:36:00 +0550
last compaction end: 2019-11-11 12:12:46 +0550
duration: 2206s
result: success

file path: /home_leofs/avs/object/1.avs
active number of objects: 14465
total number of objects: 14602
active size of objects: 70832495438
total size of objects: 71528678462
ratio of active size: 99.03%
last compaction start: 2019-11-11 11:36:00 +0550
last compaction end: 2019-11-11 12:11:57 +0550
duration: 2157s
result: success

file path: /home_leofs/avs/object/2.avs
active number of objects: 14529
total number of objects: 14655
active size of objects: 71285231513
total size of objects: 71915537550
ratio of active size: 99.12%
last compaction start: 2019-11-11 11:36:00 +0550
last compaction end: 2019-11-11 12:07:25 +0550
duration: 1885s
result: success

file path: /home_leofs/avs/object/3.avs
active number of objects: 14495
total number of objects: 14590
active size of objects: 70824125928
total size of objects: 71311732780
ratio of active size: 99.32%
last compaction start: 2019-11-11 12:07:25 +0550
last compaction end: 2019-11-11 12:39:25 +0550
duration: 1920s
result: success

file path: /home_leofs/avs/object/4.avs
active number of objects: 14603
total number of objects: 14722
active size of objects: 71789658917
total size of objects: 72397567781
ratio of active size: 99.16%
last compaction start: 2019-11-11 12:11:57 +0550
last compaction end: 2019-11-11 12:49:21 +0550
duration: 2244s
result: success

file path: /home_leofs/avs/object/5.avs
active number of objects: 14629
total number of objects: 14735
active size of objects: 71654188492
total size of objects: 72192465024
ratio of active size: 99.25%
last compaction start: 2019-11-11 12:12:46 +0550
last compaction end: 2019-11-11 12:50:07 +0550
duration: 2241s
result: success

file path: /home_leofs/avs/object/6.avs
active number of objects: 14583
total number of objects: 14696
active size of objects: 71302827203
total size of objects: 71889676315
ratio of active size: 99.18%
last compaction start: 2019-11-11 12:39:25 +0550
last compaction end: 2019-11-11 13:06:35 +0550
duration: 1630s
result: success

file path: /home_leofs/avs/object/7.avs
active number of objects: 14523
total number of objects: 14646
active size of objects: 71387603268
total size of objects: 72027259495
ratio of active size: 99.11%
last compaction start: 2019-11-11 12:49:21 +0550
last compaction end: 2019-11-11 13:12:53 +0550
duration: 1412s
result: success


@yosukehara
Member

> I am using LeoFS version 1.3.3. using S3cmd command

We have fixed several data-compaction issues since that release:

If you'd like to fix this problem, you should consider migrating your LeoFS cluster to the latest version.
