
Can I directly run it using mpi #19

Open
longbinlai opened this issue May 19, 2020 · 3 comments

Comments

@longbinlai

Hi, I am trying to run the system via:
mpirun -n 4 toolkits/cc ../datasets/google/graph.bin 875713

However, it does not behave as if it were running with 4 threads, as I would have expected; it is slower than directly running:
toolkits/cc ../datasets/google/graph.bin 875713 (I think this is the single-thread case, no?)

Do you have any suggestions for running it with MPI instead of Slurm, as suggested? Installing Slurm on our cluster is a bit of a headache.

Thank you very much.

@coolerzxw
Member

Hi @longbinlai ,
mpirun -n 4 means running with 4 processes (on a cluster or on a single machine, depending on your environment and configuration). Each Gemini process will try to use all the hardware threads it sees, so running the binary directly launches one process that uses every hardware thread on the machine, while mpirun -n 4 on that same machine starts 4 such processes that oversubscribe the cores, which is why it is slower. Running Gemini with mpirun is no different from running any other MPI + OpenMP program on a cluster.
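For example, with Open MPI the intended cluster launch could look like the sketch below ("hosts" is a hypothetical hostfile listing your machines; other MPI implementations spell these flags differently):

# One Gemini process per machine; each process then uses all the
# hardware threads on its machine (Open MPI syntax, hypothetical hostfile).
mpirun -npernode 1 -hostfile hosts toolkits/cc ../datasets/google/graph.bin 875713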

@longbinlai
Author

Does that mean that, if I am running on a cluster, I should just run 1 process on each machine, i.e., mpirun -npernode 1 toolkits/cc ..? In addition, is it possible to make the number of threads each machine runs configurable? If so, could you please instruct me on how to modify the code? Thank you very much.

@coolerzxw
Member

Gemini adopts the MPI + OpenMP approach, so it is best if you can configure your cluster to launch 1 process on each machine, giving each process all the cores available on that machine. I suggest you ask your colleagues to help you with this, as MPI configurations differ widely from site to site.
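On the thread-count question: a standard OpenMP program honors the OMP_NUM_THREADS environment variable, so a sketch like the one below may be all you need; but if Gemini sizes its thread pool from the NUMA topology it detects rather than from the OpenMP default, you would have to change that initialization in the source instead. The -x flag for forwarding environment variables and -npernode are Open MPI syntax, and "hosts" is again a hypothetical hostfile.

# Sketch, assuming the binary respects OMP_NUM_THREADS like a typical
# OpenMP program; verify against Gemini's thread-pool setup first.
OMP_NUM_THREADS=8 mpirun -npernode 1 -x OMP_NUM_THREADS -hostfile hosts toolkits/cc ../datasets/google/graph.bin 875713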
