I'm having trouble finding concrete information on whether squashfs is designed to handle packing and unpacking large amounts of files with low/constant RAM usage.
I ran mksquashfs on a directory with 200 million files, around 20 TB total size.
I used the flags -no-duplicates -no-hardlinks; mksquashfs version 4.5.1 (2022/03/17) on Linux x86_64.
It OOM'ed at 53 GB of resident memory.
Should mksquashfs handle this? If so, I'd consider the OOM a bug.
Otherwise, please treat this as a feature request: a tool that can pack this many files in bounded memory would be very nice to have.
This is an interesting request. Back in the early days of Squashfs (from 2002 to about 2006), Mksquashfs did one pass over the source filesystem creating the Squashfs filesystem as it went. This did not require caching any of the source filesystem and so it was very light on memory use.
Unfortunately, adding features such as real inode numbers, hard-link support (including inode nlink counts), and "." and ".." directories (the first two versions of Squashfs had none of these) requires fully scanning the source filesystem to build an in-memory representation.
That representation takes memory, so roughly 53 GB is plausible for around 200 million files; the OOM is expected behaviour rather than a bug.
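As a back-of-envelope sanity check of the figures above (this per-file estimate is derived from the reported numbers, not from any documented mksquashfs data structure size):

```python
# Estimate the in-memory state mksquashfs kept per file, given the
# reported 53 GB resident at OOM and 200 million source files.
# Assumes "53 GB" means 53 GiB; the result is only an order-of-magnitude check.

files = 200_000_000
resident_bytes = 53 * 1024**3  # 53 GiB resident when the OOM occurred

bytes_per_file = resident_bytes / files
print(f"~{bytes_per_file:.0f} bytes of in-memory state per file")
```

A few hundred bytes per inode is consistent with keeping a full directory-tree representation (names, inode metadata, pointers) in memory, which is why the memory footprint scales linearly with file count.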
But if someone were happy to forgo hard-link detection and advanced features such as pseudo files and actions, it may be possible to shrink the in-memory representation and move closer to the original single-pass approach in a "memory-light mode".
I'll add it to the list of enhancements, and see if priorities allow it to be looked at for the next release.