Issue when fetching large database #31
Hi @JejM, thanks for reporting this. Sorry to hear that you are having trouble creating the database you want. In general there is no limitation on the database size from the BCdatabaser side; failures like this are usually caused by transient network errors when downloading from NCBI (see #16).
Let me know if it is a different error that we can work on fixing; otherwise, feel free to add your 👍 to #16 to increase its priority.
Thanks for the feedback. Reading the other issues properly would have prevented this duplicate; apologies. When I re-ran the fetch, taking only one replicate of every taxon, it finished correctly. So, as you said, it is most likely related to network issues with NCBI. With a little 'luck', fetching large databases can still work with the current version of BCdatabaser. It is definitely the most convenient method out there. Thank you for this.
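Since transient NCBI network errors seem to be the culprit, retrying each failed batch with exponential backoff can reduce the need for 'luck'. The sketch below is not BCdatabaser's actual code; `fetch_with_retries` and the `flaky` stand-in are hypothetical names, and a real run would pass in the function that downloads one batch from NCBI.

```python
import time

def fetch_with_retries(fetch, batch, max_tries=5, delay=1.0):
    """Call fetch(batch), retrying transient failures with exponential backoff.

    `fetch` is a hypothetical callable standing in for one NCBI batch
    download; network-level errors (OSError) are retried instead of
    aborting the whole multi-thousand-batch run.
    """
    for attempt in range(1, max_tries + 1):
        try:
            return fetch(batch)
        except OSError:
            if attempt == max_tries:
                raise  # give up after max_tries attempts
            # Wait 1x, 2x, 4x, ... the base delay before retrying.
            time.sleep(delay * 2 ** (attempt - 1))

# Demo with a flaky stand-in that fails twice, then succeeds.
calls = {"n": 0}

def flaky(batch):
    calls["n"] += 1
    if calls["n"] < 3:
        raise OSError("connection reset")
    return f"batch {batch} ok"

print(fetch_with_retries(flaky, 22300, delay=0.001))  # prints "batch 22300 ok"
```

With a wrapper like this around the per-batch download, a single dropped connection at batch 22300 would cost a few retries rather than the whole fetch.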
@JejM You can also download this dataset: https://zenodo.org/record/3339029 — a full ITS2 plant dataset is already deposited there, generated with BCdatabaser using the default web settings.
When trying to fetch the whole ITS2 database for Viridiplantae, the process breaks at batch 22300. It broke at different batches during previous trials (e.g. 23600). Possibly this is outside the scope of BCdatabaser and it cannot handle such a large download.
Specifics:
Ubuntu 18.04.3
BCdatabaser is run through Docker, set up according to the instructions
The primer file is identical to the one provided here (i.e. Sickel et al. 2015)
The log file is attached: bcdatabaser.log