pixz isn't using all cores during decompression #71

Open
shmerl opened this issue Aug 15, 2017 · 3 comments

shmerl commented Aug 15, 2017

I just tried compressing a big apitrace file (13 GB), which got compressed to around 1 GB. During compression all cores are used, but I noticed that during decompression only a small portion of the cores are used, and many remain idle. I'm running the test on a Ryzen 7 1700X, so it has 8 cores and 16 threads available (hyperthreading).

Is decompression hard-limited in how many cores it uses?

vasi (Owner) commented Aug 15, 2017

There shouldn't be any hard limit, but writing is usually slower than reading, so it's possible that's the bottleneck?
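
One quick way to isolate the write path (an editor's sketch, not from the thread; the file name is a placeholder) is to discard the output entirely:

```sh
# Decompress to /dev/null so disk writes are out of the picture.
# If all cores light up here, the write path is the bottleneck.
pixz -d < trace.tpxz > /dev/null
```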

shmerl (Author) commented Aug 15, 2017

That could be the case; I'm using an HDD. I can get an SSD for a test.

On a side note, maybe writing could be optimized with some buffering to avoid the bottleneck? I.e., accumulate output in RAM before writing a big chunk to disk in one go. That would minimize parallel disk access.
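
As a rough shell-level illustration of that idea (an editor's sketch, not something pixz does internally; file names are placeholders), the output stream can be batched into large chunks by piping through dd, where obs sets the output block size:

```sh
# Accumulate decompressed data into 64 MiB blocks before each write,
# so the disk sees fewer, larger sequential writes.
pixz -d < trace.tpxz | dd of=trace.out obs=64M
```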

babam86 commented Jan 30, 2018

At compression levels 8 and 9, pixz gives no advantage when decompressing. I have compared pixz with xz.
