Hi, I am trying to run the system via: `mpirun -n 4 toolkits/cc ../datasets/google/graph.bin 875713`
However, it does not behave as if it were running with 4 threads, as I would have expected; it is slower than directly running `toolkits/cc ../datasets/google/graph.bin 875713` (I think this is the single-thread case, no?)
Any suggestions on running with MPI instead of Slurm, as suggested? Installing Slurm on our cluster is a bit of a headache.
Thank you very much.
Hi @longbinlai, `mpirun -n 4` means running with 4 *processes* (on a cluster or on a single machine, depending on your environment and configuration). Each Gemini process will try to use all the hardware threads it sees, so running the binary directly launches one process that uses all the hardware threads on the machine. Running Gemini with `mpirun` is no different from running any other MPI + OpenMP program on a cluster.
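To make the process/thread distinction concrete, here is a minimal sketch (the invocations are from this thread; note that this is an illustration of what each command launches, not a benchmark):

```shell
# "mpirun -n 4" launches 4 MPI *processes*, not 4 threads. Each Gemini
# process then spawns OpenMP threads for every hardware thread it sees,
# so 4 such processes on one machine oversubscribe the cores -- a likely
# cause of the observed slowdown.

# Single-machine run: one process, using all cores via OpenMP.
single="toolkits/cc ../datasets/google/graph.bin 875713"

# What "mpirun -n 4" actually does on one machine: 4 full-size processes,
# each trying to use every core.
four_procs="mpirun -n 4 $single"
echo "$four_procs"
```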
Does that mean that if I am running on a cluster, I should just run one process on each machine, i.e. `mpirun -npernode 1 toolkits/cc ..`? In addition, is it possible to make the number of threads each machine runs configurable? If so, could you please instruct me on how to modify the code? Thank you very much.
Gemini adopts the MPI + OpenMP approach, so it is best to configure your cluster to launch one process on each machine, giving each process all the cores available on that machine. I suggest you ask your colleagues to help you with this, as MPI configurations can differ widely between clusters.
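As a concrete sketch of the one-process-per-machine setup: `-npernode`, `-hostfile`, and `-x` are Open MPI flags (other MPI implementations spell these differently, e.g. `-ppn` under MPICH/Hydra); `hosts` is a hypothetical hostfile; and whether `OMP_NUM_THREADS` takes effect depends on whether the binary overrides the OpenMP default thread count in code:

```shell
# Hypothetical hostfile "hosts", one line per machine:
#   node1
#   node2

# One Gemini process per machine; each uses all local cores via OpenMP.
cmd="mpirun -npernode 1 -hostfile hosts toolkits/cc ../datasets/google/graph.bin 875713"
echo "$cmd"

# To cap the threads each process may use, set the standard OpenMP
# environment variable and ask mpirun to forward it to remote nodes
# with "-x". This only works if the program does not hard-code its
# thread count when initializing OpenMP.
capped="OMP_NUM_THREADS=16 mpirun -x OMP_NUM_THREADS -npernode 1 -hostfile hosts toolkits/cc ../datasets/google/graph.bin 875713"
echo "$capped"
```

If setting `OMP_NUM_THREADS` has no effect, the thread count is probably being set programmatically (e.g. via `omp_set_num_threads`), and that call is the place to make configurable in the source.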