Overestimating available RAM #25
Comments
That's unexpected behaviour, e.g.:
More importantly, though, you should not be using Velvet anymore. There are far superior tools for Illumina genome assembly: look at Shovill, SPAdes, Minia, ABySS, MEGAHIT, IDBA, etc.
I just noticed you said "BAM files". Is that why you are trying to use Velvet?
Hi Torsten, thanks so much for your reply. I'm running these through an SGE scheduler, so I've rerun it (in case available RAM varies by which machine it gets sent to); this is the `free -h` output and then the VelvetOptimiser output:
Thanks also for the further advice. I would ordinarily use SPAdes, but we have thousands of genomes (from years ago) assembled with Velvet, and I need to assemble ~1000 more Illumina sequences that weren't assembled at the time, so I'd like to stay consistent. If converting to FASTQ will fix the memory estimation issue (I'm not sure), then I can do that. Best, Matt
Hi,
VelvetOptimiser is overestimating how much RAM I have available (and being killed on the server as a result). I have 192 GB, but it reports 485 GB of "Current free RAM". Is there a way to limit memory usage? I attempted to use the genome size estimation flag, but that doesn't work from BAM files.
Thanks in advance for any help with this!