From Gitter:
Hello, I'm running a Brainiak searchlight (sl_rad = 3, max_blk_edge = 10, pool_size = 1) and getting errors from mpi4py (trace below). The searchlight loads 253GB of data. I'm using 3 Tiger cluster nodes (120 ranks in total), and the job uses 840GB of memory across all 3 nodes. The log shows an mpi4py error; a quick Google search suggests that this error crops up when the pickled object is larger than 2GB (https://githubmemory.com/repo/mpi4py/mpi4py/issues/119). Any suggestions for solving this issue would be much appreciated!
Traceback (most recent call last):
File "../notebooks/batch/delta_searchlight.py", line 899, in <module>
main()
File "../notebooks/batch/delta_searchlight.py", line 880, in main
sl.distribute(full_set, sl_mask)
File "../.conda/envs/brainiak11/lib/python3.7/site-packages/brainiak/searchlight/searchlight.py", line 370, in distribute
for (s_idx, s) in enumerate(splitsubj)]
File "../.conda/envs/brainiak11/lib/python3.7/site-packages/brainiak/searchlight/searchlight.py", line 370, in <listcomp>
for (s_idx, s) in enumerate(splitsubj)]
File "../.conda/envs/brainiak11/lib/python3.7/site-packages/brainiak/searchlight/searchlight.py", line 310, in _scatter_list
mytrans = self.comm.scatter(padded, root=owner)
File "mpi4py/MPI/Comm.pyx", line 1267, in mpi4py.MPI.Comm.scatter
File "mpi4py/MPI/msgpickle.pxi", line 730, in mpi4py.MPI.PyMPI_scatter
File "mpi4py/MPI/msgpickle.pxi", line 125, in mpi4py.MPI.Pickle.dumpv
File "mpi4py/MPI/msgbuffer.pxi", line 44, in mpi4py.MPI.downcast
OverflowError: integer 2157576948 does not fit in 'int'
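The overflow is consistent with the numbers in the report: MPI's C API describes message sizes with a C int, so mpi4py must downcast the pickled payload's byte count to at most 2**31 - 1 = 2147483647, and 253 GB scattered across 120 ranks comes out to roughly 2.1 GB per rank — just past that ceiling. A quick sanity check (the byte counts below are taken from the error message and job description, not measured independently):

```python
# Ceiling that mpi4py.MPI.downcast enforces: a 32-bit signed C int
INT_MAX = 2**31 - 1  # 2147483647

# Pickled bytes for one rank's chunk, from the OverflowError message
payload = 2157576948

# The chunk is larger than any count a C int can represent
assert payload > INT_MAX

# It is also within a few percent of 253 GB / 120 ranks,
# matching the job setup described above
total_bytes = 253 * 10**9
ranks = 120
per_rank = total_bytes / ranks  # ≈ 2.11e9 bytes
assert abs(per_rank - payload) / payload < 0.05
```

Two directions that are commonly suggested for this class of error (both untested here): split the data so each scatter call moves well under 2 GB per rank (e.g. more ranks, or scattering subjects in smaller batches), or, on mpi4py ≥ 3.1, wrap the communicator with `mpi4py.util.pkl5`, which is documented to support pickle-based communication of objects larger than 2 GB by chunking the serialized stream.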