I've created a CryFS container on my HDD, which is connected to my PC via USB 3.0, and then copied some files to it.
When I try to open the directory 1/2/3/4, which contains 170 photos of 10-11 MB each, my file manager hangs for 3-5 minutes or more while loading it. Opening a single image also takes very long (about a minute or more).
The performance is just unusable. With VeraCrypt, all file operations on the same HDD feel instantaneous.
I tried copying the container to my SSD, and I also tried creating containers with different block sizes (default, 5 MB, 10 MB), but the results were always bad.
I don't know how to measure this properly, but performance is very low. My hardware is decent, so I don't think that's the cause.
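For what it's worth, a rough way to quantify the slowdown would be something like the following sketch (Python 3; the mount path is hypothetical and needs to be adjusted):

```python
import os
import time

# Hypothetical CryFS mount point -- adjust to your setup.
PHOTO_DIR = "/path/to/cryfs-mount/1/2/3/4"

start = time.monotonic()
total_bytes = 0
for entry in os.scandir(PHOTO_DIR):
    if entry.is_file():
        # Read each photo fully, the same way an image viewer would.
        with open(entry.path, "rb") as f:
            total_bytes += len(f.read())
elapsed = time.monotonic() - start

mb = total_bytes / 1e6
print(f"Read {mb:.1f} MB in {elapsed:.1f} s ({mb / elapsed:.1f} MB/s)")
```

To get cold-cache numbers, the page cache can be dropped first on Linux with `sync; echo 3 | sudo tee /proc/sys/vm/drop_caches`.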
Do you have any ideas about this problem?
Maybe it's because the strategy of storing files as fixed-size blocks is not optimal?
What do you think about storing files as chunks instead of blocks? We would still keep the number of files and their sizes secret if each file is split into a random number of chunks, but when reading a file we would then read, for example, ~10 chunks instead of 500 blocks.
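To make the arithmetic concrete, here is a rough sketch of the idea (not CryFS's actual code; the block size and chunk-count range are illustrative assumptions):

```python
import math
import random

FILE_SIZE = 10 * 1024 * 1024  # one 10 MB photo, as in the example above
BLOCK_SIZE = 16 * 1024        # illustrative fixed block size, not CryFS's actual default

# Current scheme: fixed-size blocks -> one read per block.
blocks_needed = math.ceil(FILE_SIZE / BLOCK_SIZE)

# Proposed scheme: split the file into a small random number of chunks
# (random so the chunk layout doesn't leak the exact file size).
num_chunks = random.randint(5, 15)
cut_points = sorted(random.sample(range(1, FILE_SIZE), num_chunks - 1))
chunk_sizes = [b - a for a, b in zip([0] + cut_points, cut_points + [FILE_SIZE])]

print(f"fixed blocks:  {blocks_needed} reads")      # ~640 reads for 16 KB blocks
print(f"random chunks: {len(chunk_sizes)} reads")   # ~10 reads
```

So for the same 10 MB file, the number of separate encrypted objects to locate and read would drop by roughly two orders of magnitude, which matters a lot on a spinning HDD where each read can cost a seek.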