Add Hierarchical compression. #675

Open
Hello1024 opened this issue Mar 14, 2022 · 6 comments

@Hello1024

ZIM currently allows data to be arranged in clusters, which are then compressed. Large clusters give better compression, while small clusters allow faster access to a random article.

In a future version of the ZIM file format, this tradeoff could be substantially improved by allowing clusters to have an extra field specifying "parent cluster for compression context".

To decode a given cluster, the decoder would first have to decode the parent cluster, and then decode the cluster it needs, using the same instance of the data decompressor (so internal state is maintained, and therefore a better ratio is achieved).
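To make the decoding flow concrete, here is a minimal sketch of the idea using a linear single-parent chain and Python's zlib (chosen only because its incremental API exposes a sync flush and so can keep one compression context across cluster boundaries; a real implementation would presumably target zstd or lzma, and the function names are purely illustrative):

```python
import zlib

def compress_chain(clusters, level=9):
    """Compress clusters as one continuous stream, sync-flushing at each
    cluster boundary so every cluster gets its own byte range in the output
    while still reusing the compression context built up by its ancestors."""
    comp = zlib.compressobj(level=level)
    chain = []
    for data in clusters:
        # The compressed bytes of this cluster depend on the state left
        # behind by all previous clusters (its "parents" in a linear chain).
        chain.append(comp.compress(data) + comp.flush(zlib.Z_SYNC_FLUSH))
    return chain

def decompress_cluster(chain, index):
    """Decode cluster `index` by first replaying its ancestors through the
    same decompressor instance (rebuilding internal state), then decoding
    only the cluster we actually want."""
    decomp = zlib.decompressobj()
    for parent_chunk in chain[:index]:
        decomp.decompress(parent_chunk)   # output discarded, state kept
    return decomp.decompress(chain[index])

clusters = [b"article A" * 500, b"article B" * 500, b"article C" * 500]
chain = compress_chain(clusters)
assert decompress_cluster(chain, 2) == clusters[2]
```

The hierarchical version would simply replace the linear `chain[:index]` replay with the cluster's actual ancestor path.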

This change means that, for wikipedia_nb_50000_nopic_2021-05.zim:

  • For the same compressed file size, access times could be reduced by 55%
  • Keeping the decompression time and algorithm the same (i.e. still decompressing a total of 2 MB per accessed URL), the compressed size of all the data can be reduced by 2.5% (using a naive single-parent algorithm).

There is also the opportunity to cache decompressor states for commonly used parents, substantially reducing access times to multiple documents.

Is there interest in taking this approach?

@Hello1024
Author

Update: I have re-run the analysis with a smarter 'tree' of clusters, still never decompressing more than 2 MB total to access any blob, and we now get a 7.1% decrease in file size. Alternatively, for a fixed file size, access is now over 4x faster (only needing to decode a maximum of 420 kB, vs. the 2 MB baseline).

Do these numbers make it worth adding this complexity?

@kelson42 kelson42 added this to the 7.3.0 milestone Mar 15, 2022
@kelson42
Contributor

kelson42 commented Mar 15, 2022

I’m interested in this approach. To me it looks similar to what we discussed a few years ago in #76.

What is unclear to me is how to choose the parent cluster for each cluster… in a way that ultimately makes decompression faster in most (if not all) scenarios.

@Hello1024
Author

My latest approach (which is just a pile of python to try out ideas - nowhere near implementation stage) constructs a tree with the following logic:

Put the documents in a tree, in 'pre-order traversal' order. I added the documents in alphabetical order, nothing smart.

Each route from root to tip is capped at 2 megabytes. Each node is limited to 4 children. The root node may have more children.

The benefit of this approach is that the whole lot can be written out sequentially with no substantial RAM requirements.

When reading, you never need to decode more than 2MB.

This approach might not be the best - just the first that came to mind.
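For concreteness, a rough sketch of one way to build such a tree under those constraints (illustrative only, not the actual script; the greedy backtracking rule and the Node layout are assumptions, while the 2 MB path cap and 4-child fan-out come from the description above):

```python
PATH_CAP = 2 * 1024 * 1024   # max decompressed bytes on any root-to-leaf path
MAX_CHILDREN = 4             # fan-out limit for every node except the root

class Node:
    def __init__(self, doc, parent=None):
        self.doc = doc
        self.parent = parent
        self.children = []
        # Decompressed size of the whole path from the root down to this node.
        self.path_size = len(doc) + (parent.path_size if parent else 0)

def build_tree(docs):
    """Place documents (already sorted, e.g. alphabetically) into a tree in
    pre-order: keep descending while the path cap allows it, otherwise
    backtrack to the nearest ancestor with room for another child."""
    docs = iter(docs)
    root = Node(next(docs))
    cursor = root
    for doc in docs:
        placed = None
        node = cursor
        while node is not None:
            fanout_ok = node is root or len(node.children) < MAX_CHILDREN
            if fanout_ok and node.path_size + len(doc) <= PATH_CAP:
                placed = Node(doc, node)
                node.children.append(placed)
                break
            node = node.parent
        if placed is None:
            # Document too large to fit under the cap anywhere: hang it
            # directly off the root and accept that this one path is over.
            placed = Node(doc, root)
            root.children.append(placed)
        cursor = placed              # continue the pre-order fill from here
    return root
```

Reading a document then means decompressing at most the documents on its root-to-leaf path, which the cap keeps at or under 2 MB (except for the oversized-document fallback).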

@mgautierfr
Collaborator

This is definitely interesting. Do you have some code (even a quick and dirty POC) showing how you regroup the content into clusters and how they are decompressed?

I see a few potential issues with this approach:

  • We will need the "parent cluster" to decompress a child one (but with smaller clusters, we decompress the same total size as previously)
  • Old libzim would not be able to read the new zim files.

But the numbers you are providing are worth at least discussing further, to see what we can do.


I am slowly working on a new archive format (https://framagit.org/jubako); I will probably integrate your idea into this format.

@Hello1024
Author

Here is my quick and dirty POC, which compresses and decompresses. Adjust the compression level on line 20 - most of the benefit is seen at higher levels, but obviously that's slow for testing.

https://gist.github.com/Hello1024/9cc20300205c6aec214519cf84bbebb3

@Hello1024
Author

One potential disadvantage of this approach...

While random access is very fast, decompressing all the data sequentially becomes slow with the APIs available in libzstd or liblzma.

That is because, to decompress a single entry, one simply decompresses each node from the root down to the node of interest. However, when decompressing another node afterwards, the root has in principle already been decompressed, so there should be no need to decompress it again... But with the API available in zstd, there is no way to get the decompressor back into the state it was in after decompressing the root node, without decompressing the root node again.

Theoretically, all that's needed is an API to 'snapshot' the internal state of the decompressor. While such an API appears to exist (ZSTD_copyCCtx()), the comments in the source code show it cannot be used once data has already passed through the compressor/decompressor, making it not useful for our purposes.
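As an illustration of the missing primitive: Python's zlib bindings do expose such a snapshot for their incremental decompressor (Decompress.copy()), so the state-caching idea mentioned earlier can at least be sketched there, assuming the hypothetical linear chain layout from the first example (zstd or lzma would need an equivalent API):

```python
import zlib

class ChainReader:
    """Random-access reader over a sync-flushed compressed chain that caches
    decompressor snapshots, so shared ancestors are only decompressed once."""

    def __init__(self, chain):
        self.chain = chain
        # snapshots[i] holds the decompressor state *before* cluster i.
        self.snapshots = {0: zlib.decompressobj()}

    def _state_before(self, index):
        # Start from the deepest cached ancestor state, then replay the
        # missing clusters through a copy of it, caching new snapshots.
        start = max(i for i in self.snapshots if i <= index)
        decomp = self.snapshots[start].copy()
        for i in range(start, index):
            decomp.decompress(self.chain[i])       # output discarded
            self.snapshots[i + 1] = decomp.copy()  # snapshot for later reuse
        return decomp

    def read(self, index):
        return self._state_before(index).decompress(self.chain[index])
```

A real cache would of course only keep states for frequently used parents rather than every boundary, but it shows why a cheap snapshot/copy operation is the key missing piece in the zstd API.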

Considering that ZIM seems to mostly require random read performance and a high compression ratio, this tradeoff seems acceptable. Especially since the limitation is not a fundamental one, but merely an implementation detail which could be fixed if really necessary.

@kelson42 kelson42 modified the milestones: 7.3.0, 7.4.0 Apr 12, 2022
@kelson42 kelson42 modified the milestones: 8.3.0, 9.0.0 Jul 22, 2023