Slow, although it is just checking timestamps #454
Comments
At the moment, the metadata for the files and directories is not cached, but loaded (and decrypted) from the repository. This is done once per directory. I'm planning to cache metadata locally, which is not yet implemented but should speed up "incremental" backups a lot.
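The idea behind such a cache can be sketched roughly as follows (a minimal illustration only; all type and function names here are hypothetical and not restic's actual API): directory metadata is loaded and decrypted once per tree ID, then served from memory, so repeated lookups avoid the round-trip to the repository.

```go
package main

import "fmt"

// TreeID identifies a directory's metadata blob (simplified).
type TreeID string

// Tree is a decrypted directory listing (simplified).
type Tree struct {
	Nodes []string
}

// Repo stands in for the remote repository; loads counts how often
// the expensive load-and-decrypt path is taken.
type Repo struct {
	loads int
}

// loadTree simulates loading and decrypting a tree from the repository.
func (r *Repo) loadTree(id TreeID) Tree {
	r.loads++
	return Tree{Nodes: []string{"a.txt", "b.txt"}}
}

// CachedRepo wraps Repo with an in-memory metadata cache.
type CachedRepo struct {
	repo  *Repo
	cache map[TreeID]Tree
}

// Tree returns the cached copy if present, otherwise loads it once.
func (c *CachedRepo) Tree(id TreeID) Tree {
	if t, ok := c.cache[id]; ok {
		return t // cache hit: no repository access, no decryption
	}
	t := c.repo.loadTree(id)
	c.cache[id] = t
	return t
}

// countLoads looks up the same tree twice and reports how many
// repository loads actually happened.
func countLoads() int {
	c := &CachedRepo{repo: &Repo{}, cache: map[TreeID]Tree{}}
	c.Tree("abc")
	c.Tree("abc") // served from the cache
	return c.repo.loads
}

func main() {
	fmt.Println(countLoads()) // 1
}
```

With the cache in place, the second and later lookups of the same directory cost a map access instead of a network fetch plus decryption, which is why incremental backups benefit so much.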
Hi! Could this also cause poor performance for incremental backups over a slow WAN connection? I just backed up a folder with somewhat over 9000 files and 250 MB to a remote S3 server. Both computers are connected with an asymmetrical internet connection of 50/5 Mbit/s (down/up). The initial backup took about 5 minutes and seemed pretty reasonable, but a second backup shortly after that took almost twice as long! A folder with fewer files seems to be much faster.
Yes, this will most likely be the reason. For this particular use case there's a workaround: use the …
Thank you very much! Works like a charm!
We've added a local metadata cache (see #1040) in the master branch. I think this issue is resolved and therefore I'm closing it. Thanks!
I have 164809 files to back up regularly (about 60 GB)... Every time I run "restic backup", the reported speed doesn't go beyond 33 MB/s, and checking with strace it's only doing lstat() calls.
This takes about 20 minutes per backup. I wonder what restic is doing, because with almost all files unmodified it shows a steady 33 MB/s, and I understand it only needs to lstat() them, which is exactly what restic already does in the first step of the backup just to show the total size, in 6 or 7 seconds.
Is it just CPU time spent checking whether the contents for that same file/timestamp are already present in a previous restic snapshot?