
Very large memory usage #103

Open
fgvieira opened this issue Jul 13, 2021 · 1 comment

Comments

fgvieira commented Jul 13, 2021

I'm running svaba to call somatic SVs with a matched blood normal on 9 samples, like:

svaba run --threads 30 --reference-genome GCA_000001405.15_GRCh38_no_alt_analysis_set.fna --blacklist blacklist.bed --tumor-bam tumor.cram --normal-bam normal.cram --id-string temp/tumor

However, pestat reports quite large memory usage, in some cases close to 500 GB:

Netload file /var/tmp/netload age: 143 seconds, dated 2021-07-13 14:21:45
Node      state  load    pmem ncpu   mem   resi usrs tasks NetMbit jobids/users
i-04-f0010 free    20* 1445383  40 1461767 403430  2/2    1     30    32760182
i-04-f0011 free    19* 1445383  40 1461767 530744  2/2    1     69    32760181
i-04-f0012 free    21* 1445383  40 1461767 194031  2/2    1      0    32760180
i-04-f0013 free    21* 1445383  40 1461767 261501  2/2    1     27    32760185
i-04-f0014 free    26* 1445383  40 1461767 167457  2/2    1     24    32760184
i-04-f0015 free    24* 1445383  40 1461767 229948  2/2    1     29    32760179
i-04-f0016 free    22* 1445383  40 1461767 389336  2/2    1     24    32760178
i-04-f0017 free    23* 1445383  40 1461767 212210  2/2    1     26    32760183
i-04-f0024 free    25* 1547850  40 1564234 383459  2/2    1     66    32760186

Is this normal? According to the svaba paper, it should use very little memory. Is there anything I can do to reduce memory usage?

@tanubrata

I think the reason is that you are using CRAM files. I recently had this issue, and my Slurm jobs were getting killed due to memory exhaustion:

slurmstepd: error: Detected 1 oom-kill event(s) in StepId=40907031.batch cgroup. Some of your processes may have been killed by the cgroup out-of-memory handler.

I tried different resource options and was still getting this issue. When I switched to BAM files for the same samples that were failing, it worked perfectly fine, even with fewer resources. Hope this helps.
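
If converting is an option, the CRAMs can be turned back into BAMs with samtools before running svaba. A minimal sketch, assuming the same reference FASTA used to write the CRAMs and the file names from the command above (thread count and output names are placeholders):

# Convert CRAM to BAM using the reference the CRAM was written against,
# then index the result; repeat for the normal sample.
samtools view -@ 8 -b -T GCA_000001405.15_GRCh38_no_alt_analysis_set.fna -o tumor.bam tumor.cram
samtools index -@ 8 tumor.bam
samtools view -@ 8 -b -T GCA_000001405.15_GRCh38_no_alt_analysis_set.fna -o normal.bam normal.cram
samtools index -@ 8 normal.bam

Then point svaba at the resulting BAMs via --tumor-bam tumor.bam and --normal-bam normal.bam instead of the CRAMs.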
