More RAM required when using more CPUs? #369

Open
SilentGene opened this issue Aug 9, 2019 · 1 comment

@SilentGene

Hi, while I understand pplacer requires a lot of memory because it caches likelihood vectors for all of the internal nodes, I'm still confused about why it needs more memory when I use more CPUs (only tested on Linux). It looks like when I double the number of CPUs, the memory requirement also doubles, which doesn't make sense to me. Does anyone know how to deal with this problem? It really matters when working with a huge reference tree. Thanks.

@aaronmussig

I've done some digging to understand what is happening here, and from my understanding (correct me if I'm wrong): it doesn't actually use more memory.

It appears that the main process forks with Unix.fork. Since the memory is copy-on-write, the children only hold a mapping to the main process's memory space.
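
To make the copy-on-write point concrete, here is a minimal sketch (my own toy code, not pplacer's): a parent allocates a large array standing in for the cached likelihood vectors, then forks read-only workers with Unix.fork. The array size and worker count are made up.

```ocaml
(* Toy demonstration of fork + copy-on-write; not pplacer code. *)
let () =
  (* Stand-in for the cached likelihood vectors: ~800 MB of floats. *)
  let big = Array.make 100_000_000 0.0 in
  let n_workers = 4 in
  let children =
    List.init n_workers (fun i ->
        match Unix.fork () with
        | 0 ->
            (* Child: read-only scan of the shared array. The kernel never
               copies these pages because the child only reads them. *)
            let sum = Array.fold_left ( +. ) 0.0 big in
            Printf.printf "worker %d: sum = %f\n%!" i sum;
            exit 0
        | pid -> pid)
  in
  (* Parent: wait for all workers to finish. *)
  List.iter (fun pid -> ignore (Unix.waitpid [] pid)) children
```

Build with something like `ocamlfind ocamlopt -package unix -linkpkg`. While the workers run, `top` shows each of them with the full ~800 MB resident, even though the physical pages exist only once (the OCaml GC may dirty some of them during marking, so the sharing isn't perfect, but the bulk of a large read-only cache stays shared).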

However, this is not what the OS reports. In my experience, as long as you have enough memory to run the main process, you can launch as many children as you want (as long as it's under 64; I've had instances of it hanging when more are used).
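
One way to check this on Linux (a hedged sketch, assuming a kernel ≥ 4.14 where /proc/<pid>/smaps_rollup exists; the function name is mine): RSS counts every copy-on-write page once per process, while PSS splits shared pages among the processes that map them, so summing PSS over the parent and its children gives a much better estimate of the real footprint.

```ocaml
(* Hedged helper, not part of pplacer: print the Rss and Pss lines from
   /proc/<pid>/smaps_rollup. Rss counts shared copy-on-write pages once
   per process; Pss divides them among the processes that map them. *)
let print_rss_and_pss pid =
  let path = Printf.sprintf "/proc/%d/smaps_rollup" pid in
  let ic = open_in path in
  let starts_with prefix line =
    String.length line >= String.length prefix
    && String.sub line 0 (String.length prefix) = prefix
  in
  (try
     while true do
       let line = input_line ic in
       if starts_with "Rss:" line || starts_with "Pss:" line then
         Printf.printf "pid %d  %s\n" pid line
     done
   with End_of_file -> ());
  close_in ic
```

Calling this on the parent and each forked child while they run should show per-process Rss near the full cache size, but Pss totals far below the sum of the Rss values.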
