Broken multi-file archive #96
Comments
There is not much that can be done: the zpaq format, for backward-compatibility reasons, does not support “holes” (i.e., corrupted archive parts). In zpaqfranz, to mitigate (not solve, mitigate) the problem, I added the backup command.
If you want to kill the process you should try Control-C. This will be intercepted and (hopefully!) some housekeeping will be done. Of course, it is not possible to prevent a “brutal” termination from resulting in data corruption.
BEWARE: use a full path with the backup command! TRANSLATION: z:\ugo\apezzi is good, apezzi is NOT good (it is a feature 😄).
The default hash is MD5; I suggest using -backupxxh3 if you do not need a "manual" MD5 check (aka: Hetzner storage boxes).
In this example you'll get:
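A minimal sketch of such a run (z:\ugo\apezzi as above; c:\data is a hypothetical source, and the exact part naming can vary by version):

```
# Multipart backup with an index/hash file, using XXH3 instead of MD5
zpaqfranz backup z:\ugo\apezzi c:\data -backupxxh3
```

This typically produces numbered parts such as apezzi_00000001.zpaq, apezzi_00000002.zpaq, ... plus apezzi_00000000_backup.txt (the hash list) and apezzi_00000000_backup.index.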
Now a quick test (not very reliable):
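A sketch of the quick check (assuming the backup set created above; this mainly verifies that the expected parts exist with the expected sizes, not their content):

```
# Quick, cheap test of the multipart backup set
zpaqfranz testbackup z:\ugo\apezzi
```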
Corruption test (-ssd is for solid-state media; on HDDs do NOT use it!):
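A sketch of the deeper check, assuming -verify re-hashes every part and compares against the stored hash list:

```
# Re-hash all parts and compare with the recorded hashes (SSD only!)
zpaqfranz testbackup z:\ugo\apezzi -verify -ssd
```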
Double check:
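As a further double check (hedged: adjust the number of ? wildcards to the actual part numbering), the standard test command can be run over the whole multipart set:

```
# Full zpaq-level test across all parts
zpaqfranz t "z:\ugo\apezzi_????????.zpaq"
```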
OK, now we corrupt the archive:
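Purely for illustration (using the Unix dd tool; the offset is arbitrary, and this should only ever be done on a copy of a backup):

```
# Overwrite 16 bytes inside part 2 to simulate on-disk corruption
dd if=/dev/zero of=apezzi_00000002.zpaq bs=1 seek=1000 count=16 conv=notrunc
```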
Piece 2 is now KO.
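Re-running the verification should now flag the damaged part, because its recomputed hash no longer matches the recorded one:

```
# The re-hash should now report part 2 as corrupted
zpaqfranz testbackup z:\ugo\apezzi -verify -ssd
```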
Thank you for your answer. Can you help me with creating proper zpaqfranz argument sets? I'm currently using the following approach, but it looks error-prone and not a good idea for a regular backup (real path names are different): I'm also using a second approach for metadata backup, which contains many MB of poorly compressible data: In general, the most important thing is:
To minimize problems I also plan to do the following (can you help with building the commands?):
-m5 is placebo-level compression, and will try to compress even incompressible data (up to -m4, incompressible data is simply stored). -filelist is not useful in your case, because it is a non-ADS (NTFS) filesystem. Then my suggestion is just:
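A hedged sketch of that kind of plain add (archive and source paths are hypothetical; default compression level, no -m5, no -filelist):

```
# Plain incremental add with default settings
zpaqfranz a /backup/data.zpaq /home/user/data
```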
(more on testing in the next posts)
You really do not need to use -m0, unless you REALLY have encrypted or highly compressed files (.MP4 etc). For example, when making a backup of a TrueCrypt volume, -m0 is appropriate.
It depends on whether you want to use a multivolume or a monolithic archive. For multivolume I suggest backup: it works just like a regular multivolume add, BUT with a text file of hashes as well. For monolithic I suggest the t (test) command after an add, plus (if you can) -paranoid, or the w command (if you have enough RAM); a sketch follows.
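A sketch of the monolithic workflow (paths hypothetical):

```
# Add, then test; -paranoid makes the test stricter (and slower)
zpaqfranz a /backup/data.zpaq /home/user/data
zpaqfranz t /backup/data.zpaq -paranoid

# Alternative deep check decoding into RAM (needs enough memory)
zpaqfranz w /backup/data.zpaq
```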
This can be a good example, with an rsync-based remote-cloud backup (aka Hetzner storage box). Just a snippet, adjust as you like.
The idea is:
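The original snippet is not preserved here; what follows is a hedged reconstruction of the general shape (host, user and all paths are placeholders): make the multipart backup locally, verify it, then rsync the parts to the storage box.

```
#!/bin/sh
# 1) Local multipart backup (default MD5 hashes, so they can later be
#    cross-checked against the remote copies)
zpaqfranz backup /backup/apezzi /home/user/data

# 2) Verify the local parts against the stored hash list
zpaqfranz testbackup /backup/apezzi -verify

# 3) Push parts plus hash/index files to the storage box
rsync -av /backup/apezzi_* uXXXXX@uXXXXX.your-storagebox.de:backup/
```

The point of keeping MD5 as the default hash (see the earlier remark about Hetzner storage boxes) is that the remote side can compute MD5 sums of the uploaded files server-side, so the local hash list can be compared against the remote copies without downloading anything.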
There is none in zpaq (more on that later).
The "right" way to do the tests depends on whether they are LOCAL or REMOTE. For LOCALS.
For REMOTE I put an example above.
Thank you very much for your comprehensive answer :) I will adapt and use your suggestions. By the way, is it possible to tweak the progress display? I'm thinking of two things: first, the progress is almost always stuck at some percentage; second, I'm parsing output from stdout and converting it to the cronicle-edge JSON format, but maybe there is a way to add such an output format (like -pakka).
In fact, not easily: it is already carefully "tweaked".
There is the fzf command, not really sure if it is enough.
I have a daily zpaq file creation. Three times the compression process was killed because of too long an execution time.
For me this was fine, because I had accidentally put a big file into the backed-up folder. The problem is that from this point on the newly created archives are somehow invalid, and I currently can't extract from archives that were - in theory - created without any errors.
The whole archive consists of files from 0001.zpaq to 0044.zpaq, one for each day.
When I execute zpaqfranz i "brainapp????.zpaq" the results show only versions 1 to 26 (versions 27, 28 and 29 were interrupted by the kill command).
When I try to extract a particular file from the 0040.zpaq file I get the error "2 bad frag IDs, skipping..." and after a few minutes zpaq exits and nothing is extracted.
I tried to trim those three files; as a result the "info" command shows the list up to 44, but it is still not possible to extract any file.
Any idea what to do next? And maybe zpaq should be improved to not fail in such a situation?