Dear Federico,
Following up on a memory issue similar to the one already posted by another user, I followed your recommendation to wrap my SCE assay (14k genes x 227k cells) in a DelayedArray:
- My SCE object
- Transforming the "batch" field from colData to a factor
- Converting the assay (counts) to a DelayedArray
- Running newWave
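For reference, the steps above correspond roughly to this R sketch (the object name `sce`, the `batch` column, and the `newWave()` arguments such as `K` and `children` are assumptions on my part; my actual call may differ slightly — the 24 workers would match the 24 'SO_X64_*' files mentioned below):

```r
library(SingleCellExperiment)
library(DelayedArray)
library(NewWave)

# My SCE object (14k genes x 227k cells), loaded beforehand
# sce <- readRDS("sce.rds")

# Transforming the "batch" field from colData to a factor
colData(sce)$batch <- as.factor(colData(sce)$batch)

# Converting the assay (counts) to a DelayedArray
assay(sce, "counts") <- DelayedArray(assay(sce, "counts"))

# Running newWave with batch as a covariate and 24 parallel workers
res <- newWave(sce, X = "~batch", K = 10, children = 24)
```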
I submitted this code as a job script to a large-memory node (768 GB) on my cluster. The job stopped after ~30 min with SGE exit status 37 ("failed 37 : qmaster enforced h_rt, h_cpu, or h_vmem limit"), having reached a maxvmem of 487 GB, although I had requested 720 GB.
My IT team also reported intermediate files ('sharedObjectCounter', 'SO_X64_1', 'SO_X64_2', ..., 'SO_X64_24') left behind in /dev/shm because the newWave run did not finish.
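In case it is useful to others: once it is certain that no related R processes are still running, the stale shared-memory files can apparently be removed by hand. A minimal sketch, assuming the filenames follow the pattern reported above:

```r
# Remove stale shared-memory segments left by an aborted parallel run.
# Only do this when no R workers from the job are still alive.
stale <- list.files("/dev/shm",
                    pattern = "^(sharedObjectCounter|SO_X64_)",
                    full.names = TRUE)
file.remove(stale)
```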
Do you have any tips for working around this issue? Please let me know if you need more information.
Many thanks in advance.
Elton