Make restore resumeable. #407
Comments
This is certainly possible and a good idea, thanks for raising the issue. During restore, we already have all the information we need (metadata and content-chunk lists for the files). Perhaps, as a failsafe, restore should refuse to run if the target directory is not clean/empty, unless the operator explicitly wants this behaviour, e.g. with --resume.

I have another use case for some very similar behaviour, but I would call it "revert" or "rollback". (My use case: the fact that restic does not semantically distinguish between full and incremental backups, together with the clever chunking, makes it perfect for saving states of virtual machine disk files.) In a less-than-catastrophic situation, I might want to roll back to yesterday's state quickly without having to erase the files first and then download and restore them entirely. A "revert" or "rollback" to an older snapshot would differ from a "restore --resume" in that it would expect some of the data to already be in place, create a snapshot if required, and delete new files after they were backed up.
I am on a spotty network connection sometimes, and the ability to reliably restart a restore from s3 would be great.
This will be nice, and now that we are getting a cache, the fundamental building blocks are starting to fall into place! #1040
I am currently trying to restore my backup. Unfortunately, the server connection is very spotty, so I can't restore it: every time the connection drops, the restore procedure starts all over again.
I am also interested in this feature; I hope you can implement it soon.
I have backed up a 10+ TB volume to Backblaze B2, which took about a week to upload. Now I'm unsure how I can restore this reliably. I'm curious what the current status of this issue is and what could possibly be done to get it resolved soon.
@trustin That's a huge backup, to be sure. I wonder if you could write a script to restore a few files at a time. The script would run restic repeatedly on each small set of files until they were all restored. It might be a slow, ugly workaround, but it might be reliable.
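A rough sketch of that kind of wrapper script, assuming the repository URL, snapshot ID, target directory, and path list are placeholders you would replace with your own:

```bash
#!/usr/bin/env bash
# Sketch: restore a few paths at a time and retry each batch until it
# succeeds, so a dropped connection only costs the current batch.
# REPO, SNAPSHOT, TARGET and PATHS are illustrative placeholders.
REPO="sftp:user@host:/srv/restic-repo"
SNAPSHOT="latest"
TARGET="/mnt/restore"
PATHS=(/data/dir1 /data/dir2 /data/dir3)

for p in "${PATHS[@]}"; do
    # Retry the restore of this subset until restic exits successfully.
    until restic -r "$REPO" restore "$SNAPSHOT" --target "$TARGET" --include "$p"; do
        echo "restore of $p failed, retrying in 30s..." >&2
        sleep 30
    done
done
```

Note that already-restored batches are still re-downloaded if the loop reruns them, so the smaller the batches, the less work is repeated after a failure.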
As a workaround it's possible to mount the repository with restic mount and copy the files out with rsync.
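A minimal sketch of that workaround, assuming placeholder paths and a repository on sftp; the mount has to stay running in one terminal while rsync runs in another:

```bash
# Terminal 1: expose the repository as a FUSE filesystem.
restic -r sftp:user@host:/srv/restic-repo mount /mnt/restic

# Terminal 2: copy the desired snapshot into the restore directory.
# --ignore-existing skips any file already present in the target from a
# previous, interrupted run; drop it if you prefer rsync's usual
# size/mtime comparison.
rsync -a --ignore-existing /mnt/restic/snapshots/latest/ /mnt/restore/
```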
@alphapapa @dionorgua Thanks for the suggestions. Let me try them once my current restore session fails.
That's such a good idea. I wonder, is that just plain better than running restic restore directly?
I expect performance to be worse, and in some cases much worse, when using the FUSE mount.
By the way, I'm restoring on Windows, so I'll have to spawn a virtual machine and do a Samba mount. |
I'm curious if this would work: copy the whole repository to local storage first with a tool that can resume, then run the restore against the local copy.
I'm asking this because: my connection keeps getting interrupted mid-restore, and the mount-based workarounds are awkward on Windows.
This will work for sure. But keep in mind that you'll need to transfer the whole repository, with all data (all snapshots), because you don't know in advance which files are needed to restore what you want. PS: there is some ongoing work on improving restore performance: #1719
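For illustration, a sketch of that approach with rclone; the remote name, bucket, and local paths are placeholders. Re-running the copy after an interruption skips files that already match, which is what makes the download effectively resumable:

```bash
# 1. Pull the whole repository down; an interrupted run can simply be
#    restarted, since rclone skips files that are already transferred.
rclone copy --progress b2remote:bucket/restic-repo /srv/restic-repo-copy

# 2. Restore from the local copy, with no network involved.
restic -r /srv/restic-repo-copy restore latest --target /mnt/restore
```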
This is not perfect, as it doesn't resume byte-by-byte, but when there are 1-5 files in a segment or something like it, it shouldn't be an issue. Other solutions welcome; I'm currently not in a position to implement this. @trustin that probably does not work. A workaround for now is to mount the backup and use rclone to copy from the mount to local storage.
I think this is a key feature for large backups. I will try the rsync+mount option when my backup completes; it should work for me. Another possibility might be to use something like a VPN that can handle an intermittent connection between the server and client, while making it look like there is a continuously working network, with at most large pings sometimes. (You may have to adjust timeout parameters or some such.) I have some experience with tinc working like this. So there is a question of which overhead, mount or network, is better for you. #353 suggests the rclone backend might have some resiliency features; does this apply to restore as well?
In restic 0.10.0, which was released yesterday, there are a few serious improvements to mount browsing performance.
I'd love this functionality as well. I'm very surprised it isn't already the default! I hope #3425 can get merged soon! In lieu of that, I used restic mount (described here) and rclone sync (described here) and it seems to work quite well, only downloading files that are different and removing ones that don't exist in the restic repo.
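For reference, a minimal sketch of that mount-plus-rclone-sync combination; the paths are placeholders and the mount must stay running in a separate terminal:

```bash
# Terminal 1: mount the repository.
restic -r /srv/restic-repo mount /mnt/restic

# Terminal 2: sync a snapshot into the target directory. "sync" also
# deletes files in the target that are not in the snapshot, matching the
# behaviour described above.
rclone sync /mnt/restic/snapshots/latest/ /mnt/restore/
```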
Note that this only skips whole files that already match; it does not resume a partially restored file.
@aawsome yeah, fair enough. But I'm less concerned about resuming partial restores (which, for me and most others, should never really be a big or common issue) than I am about skipping the countless whole files that are sure to already exist in every restore operation. The current behaviour is an enormous waste of bandwidth and time.
I'd be interested in resumable restores as well, especially with large backups where the connection could be interrupted before completing.
I just tested restore and noticed that I (maybe) can't download all my data from my sftp source before the next DSL reconnect. When the line reconnects, the connection of course breaks and the restore fails.
When I start a new restore to resume, all the old files are overwritten once again.
It would be nice to be able to resume a restore, for example by checking whether files already present in the restore directory are identical to those that would be restored, and skipping them to save time.
For now, this is only a test to check restore, so there is no need for quick hot-fixes like restoring separate dirs with include/exclude.