records: download all button #210
Comments
Can likely be achieved via AIP. Related to #34.
+1 A possibility you could consider is for the user to "request download all" to initiate compression, and then send a notification (by email) with a download link to the zip file when compression is done. You could make the link expire in 24 hours and then release the storage.
Another solution might be to stream all the files inside a zip using a library such as https://github.com/SpiderOak/ZipStream. No extra temporary disk storage would be required and (hopefully, if the library works as advertised) no extra memory. A combination of the mentioned methods might be best: if [total size of files] > 2GB, send an email with a link to an asynchronously generated zip; otherwise use ZipStream. Edit: Another way is triggering multiple downloads via JavaScript. This library seems to do it: https://github.com/sindresorhus/multi-download
I just uploaded a dataset of a few hundred files, and am shocked that there is no end-user "download all" button giving a ZIP, tarball, or similar (which could be generated on the fly). In hindsight, I should have uploaded an archive myself, but the upload interface didn't give explicit guidance and was clearly designed to cope with multiple files. Is the current workaround to instead upload a single archive (e.g. a ZIP)? [Update: Given the dataset has not been shared yet, I have used https://zenodo.org/support to ask about replacing the files.]
Yes, the current workaround is to upload a ZIP (ZIP is preferable to tar.gz, since ZIP files are previewed and tar.gz files are not). The problem here is that we have TB-sized datasets, so making a "download all" button is not trivial if it needs to scale.
Automatic zipping for smaller datasets (at upload time or even later) would then solve most problems, right? It would also save storage space.
I would also really appreciate this -- we have several files in our archives so users have the option of grabbing only the data they need, but many folks want all of it, and it's tedious to have to click every single file. Or is the general expectation that folks just upload one big zip file as an archive? |
May I suggest including some guidance in the upload user interface? Where it currently says:
there could be a note such as the following: "Note: if the dataset contains more than a few files, please consider packing them into a single zip file to facilitate download."
By the way, there is zenodo_get, a downloader for Zenodo records:
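For anyone curious, zenodo_get essentially walks Zenodo's public REST API. Here is a hedged Python sketch of the same idea, assuming the record JSON exposes a `files` list whose entries carry a `key` (filename) and a `links.self` download URL, which is what https://zenodo.org/api/records/<id> returned at the time of writing; `download_all` is a hypothetical, untested helper built on that assumption:

```python
import json
from urllib.request import urlopen


def record_file_urls(record: dict) -> list:
    """Extract (filename, download_url) pairs from a Zenodo record JSON.

    Assumes each entry of record["files"] has a "key" and a
    "links"["self"] field; the real API may differ or change.
    """
    return [(f["key"], f["links"]["self"]) for f in record.get("files", [])]


def download_all(record_id: int, dest: str = ".") -> None:
    """Hypothetical helper: fetch every file of a record into `dest`."""
    with urlopen(f"https://zenodo.org/api/records/{record_id}") as resp:
        record = json.load(resp)
    for name, url in record_file_urls(record):
        with urlopen(url) as src, open(f"{dest}/{name}", "wb") as dst:
            dst.write(src.read())
```

A client-side script like this avoids any server changes, at the cost of requiring users to leave the browser.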
That seems to me a good idea, at least as long as there is no built-in "download all" solution.
What about integrating that tool into the Zenodo interface? |
FYI, there is also an R package to do it (and many other things): https://github.com/eblondel/zen4R |
I can see why this might be quite difficult to implement on the server side. However, I reckon that (with a recent enough browser) a "Download All" button that generates a zip could be implemented entirely on the client side using this small JavaScript library: https://github.com/Touffy/client-zip If Zenodo's admins don't want to implement it officially, an enthusiastic user could probably make a bookmarklet that implements it.
Hi, it's not that we don't want to implement it. It's rather that it's a task that likely takes ~2-3 weeks from start to launch, and we're heavily resource-constrained. As part of optimising resources, we're moving Zenodo on top of the InvenioRDM platform, so if you're willing to help out, we'd be more than happy to engage with you on how to implement it in InvenioRDM. We do have partners in InvenioRDM who are interested in the same feature, so any help would be much appreciated.