
When removing huge amounts of data, file writes and reads become very slow #901

Open
skyjally opened this issue Dec 11, 2020 · 0 comments
Dear Developers,

Recently I deployed LizardFS as our data storage server, handling tens of millions of pictures. When the chunk server starts removing data, file writes and reads become very slow.

LizardFS version: 3.12.0
The meta server and chunk server run on the same machine:
CPU: Intel(R) Xeon(R) Silver 4116 CPU @ 2.10GHz
RAM: 256 GB
Both the meta and chunk servers use the default configuration.

I tried changing CHUNKS_SOFT_DEL_LIMIT and CHUNKS_HARD_DEL_LIMIT, but it did not seem to improve write or read performance.

Here is the LizardFS status:

[screenshot: LizardFS status page]
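For reference, a minimal sketch of the kind of tuning attempted above in the master configuration. The option names (CHUNKS_SOFT_DEL_LIMIT, CHUNKS_HARD_DEL_LIMIT) are the ones mentioned in this report; the file path and the concrete values below are illustrative assumptions, not recommended defaults:

```
# /etc/lizardfs/mfsmaster.cfg  (path may differ per distribution)

# Soft cap on chunk deletions scheduled per chunkserver per loop.
# Lowering it throttles deletion so client I/O gets more bandwidth.
CHUNKS_SOFT_DEL_LIMIT = 2

# Hard cap on deletions per chunkserver per loop; must be >= the soft limit.
CHUNKS_HARD_DEL_LIMIT = 5
```

After editing, the master typically needs to reread its configuration (e.g. a reload/SIGHUP of the mfsmaster process, depending on your version); check the mfsmaster.cfg documentation for your release before relying on this.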

Is there any suggestion for handling this situation?

Best regards!