
Overestimating available RAM #25

Open
moorembioinfo opened this issue Aug 7, 2019 · 4 comments

@moorembioinfo
Hi,

VelvetOptimiser is overestimating how much RAM I have available (and is being killed on the server as a result). I have 192GB, but it reports 485GB of "Current free RAM". Is there a way to limit memory usage? I attempted to use the genome size estimation flag, but that doesn't work from BAM files.

Thanks in advance for any help with this!

tseemann self-assigned this Aug 7, 2019
@tseemann (Owner) commented Aug 7, 2019

That's unexpected behaviour.
What does free -h say?

e.g.

free -h
              total        used        free      shared  buff/cache   available
Mem:           377G        9.4G        3.2G         27M        365G        367G
Swap:          390G          0B        390G
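The numbers free prints come from /proc/meminfo, so a quick cross-check against the kernel's own figures can help pin down where an overestimate comes from. A minimal sketch, assuming Linux (the field names are standard /proc/meminfo keys):

```shell
# Print MemTotal and MemAvailable from /proc/meminfo, converted from kB to GB.
# MemAvailable is the kernel's estimate of memory usable without swapping;
# summing MemFree + buffers + cache (or counting swap) can overestimate it badly.
awk '/^MemTotal|^MemAvailable/ {printf "%s %.1fG\n", $1, $2/1024/1024}' /proc/meminfo
```

If a tool's "free RAM" figure is far above MemAvailable, it is probably adding fields that don't represent usable memory.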

@tseemann (Owner) commented Aug 7, 2019

More importantly though, you should not be using Velvet anymore. There are far superior tools for Illumina genome assembly: look at Shovill, SPAdes, Minia, ABySS, MEGAHIT, IDBA, etc.

@tseemann (Owner) commented Aug 7, 2019

I just noticed you said "BAM files". Is that why you are trying to use Velvet?
You can extract the reads from a BAM using samtools fastq if you need to.
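As a sketch of that extraction step (the helper name bam2fastq and the filenames are placeholders, not part of samtools), the snippet below just prints the command it would run; samtools fastq's -1/-2 options name the output files for the first and second mates of paired reads:

```shell
# Hypothetical helper: print the samtools fastq command for a given BAM.
# Run the printed command (or drop the echo) on a real BAM; add -0/-s to
# also capture unpaired reads if the BAM contains any.
bam2fastq() {
  bam="$1"
  echo samtools fastq -1 "${bam%.bam}_R1.fastq" -2 "${bam%.bam}_R2.fastq" "$bam"
}

bam2fastq sample.bam
# -> samtools fastq -1 sample_R1.fastq -2 sample_R2.fastq sample.bam
```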

@moorembioinfo (Author) commented Aug 8, 2019

Hi Torsten,

Thanks so much for your reply. I'm running these through an SGE scheduler, so I've rerun it (in case available RAM varies by which machine it gets sent to). Here is the free -h output, followed by VelvetOptimiser:

              total        used        free      shared  buff/cache   available
Mem:           376G         71G        302G        753M        2.5G        300G
Swap:           15G          0B         15G

VelvetOptimiser.pl Version 2.2.6
Number of CPUs available: 24
Current free RAM: 603.032GB

Thanks also for the further advice. I would ordinarily use SPAdes, but we have thousands of genomes (from years ago) assembled with Velvet, and I need to assemble ~1000 more Illumina sequences that weren't assembled at the time, so consistency matters.


If converting to FASTQ would fix the memory estimation issue (or possible issue — I'm not sure), then I can do that.

Best,

Matt
