Task scheduling customization with MPIPoolExecutor #481
-
Hi again, I guess the answer to the following question will be "No, you're just trying to use the wrong tool for your problem" -- fundamentally because I'm after data-parallelism, not task-parallelism -- but I'll try anyway :)

Long story short: I would need to control which worker each task submitted to an `MPIPoolExecutor` gets dispatched to.

Context: in my library, each worker holds its own (large) piece of data, and I will perform several rounds of computation on that worker-resident data, so each task has to run on the worker that owns the data it operates on. I looked into mpi4py and I see the task scheduling is handled by the executor's underlying implementation, with no way to influence the task-to-worker mapping from user code.

Any thoughts, or is there something I've missed in mpi4py that I could rather use? Thanks (again!)
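For concreteness, here is the stock usage (a minimal sketch of my own, not from my actual library); as far as I can tell, `submit()` gives me no handle on which worker ends up running the task:

```python
# Stock mpi4py.futures usage: the executor picks the worker for you.
from mpi4py.futures import MPIPoolExecutor

def work(x):
    return x * x

with MPIPoolExecutor(max_workers=4) as pool:
    fut = pool.submit(work, 21)  # no argument to say *which* worker runs this
    print(fut.result())
```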
-
In the master branch, mpi4py has gained some support for the execution of parallel tasks, and that has some resemblance to what you are looking for. However, I think you will not be able to control where exactly each task runs; mapping specific tasks to specific workers is not something the API exposes.

Unfortunately, I don't think subclassing mpi4py.futures stuff will help you with anything. The scheduler is hidden deep in the implementation and uses a rather functional style of programming that is not amenable to the sort of hacks that you would like to inject. I will think a bit about whether I can improve the parallel-task implementation to support your use case, but I'm a bit skeptical I'll be able to do so without disrupting or penalizing the primary focus of the library.
Perhaps you should move away from mpi4py.futures altogether. What you are trying to do should be quite easy to implement using MPI spawn: 1) you spawn children processes, 2) you broadcast the callable function from one side to the other, 3) you send and recv input data with plain point-to-point calls, so each worker keeps its own data across rounds (see the sketch below). Do you have some previous experience with MPI such that you can implement the thing I just described?
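Something along these lines -- an untested sketch; the file names (`tasks.py`, `parent.py`, `worker.py`), worker count, and tags are illustrative. The callable lives in a shared module because plain pickle serializes functions by reference, so it must be importable on both sides:

```python
# tasks.py -- hypothetical shared module, importable by parent and workers
def work(x):
    return x * x
```

```python
# parent.py -- 1) spawn the children, 2) bcast the callable, 3) send/recv data
import sys
from mpi4py import MPI
from tasks import work

nworkers = 4
icomm = MPI.COMM_SELF.Spawn(sys.executable, args=['worker.py'],
                            maxprocs=nworkers)
icomm.bcast(work, root=MPI.ROOT)          # 2) ship the callable to all children
for rank in range(nworkers):              # 3) each worker gets its own chunk
    icomm.send(list(range(rank, rank + 3)), dest=rank, tag=0)
results = [icomm.recv(source=rank, tag=1) for rank in range(nworkers)]
print(results)
icomm.Disconnect()
```

```python
# worker.py -- each child holds its own data and applies the broadcast callable
from mpi4py import MPI

parent = MPI.Comm.Get_parent()
fn = parent.bcast(None, root=0)           # the callable sent by the parent
data = parent.recv(source=0, tag=0)       # this worker's own input data
parent.send([fn(x) for x in data], dest=0, tag=1)
parent.Disconnect()
```

The key point versus MPIPoolExecutor is that the data sent in step 3 stays on its worker, so later rounds can target exactly the worker that already holds it.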
I'll start backwards...
Yes I do, thanks
What you say makes a lot of sense. Just before reading your reply, I managed to achieve what I needed, but admittedly it's with some horrible hacks and I'm unsure how robust it is... though it seems to be doing the trick for now.
The gist of it (pseudocode) is the following:
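(What follows is an illustrative reconstruction rather than the verbatim hack: pin work to workers using only the public API by keeping one single-worker `MPIPoolExecutor` per slot, so every task submitted to a slot lands on the same dedicated process; `PinnedPool` and `submit_to` are hypothetical names.)

```python
# Hypothetical sketch: per-worker task pinning with only the public API.
from mpi4py.futures import MPIPoolExecutor

class PinnedPool:
    """One single-worker executor per slot, so tasks submitted to a given
    slot always run on the same dedicated worker process."""

    def __init__(self, nslots):
        self.executors = [MPIPoolExecutor(max_workers=1)
                          for _ in range(nslots)]

    def submit_to(self, slot, fn, *args, **kwargs):
        # Worker-resident state (e.g. a big array stashed in a module-level
        # variable by an earlier setup task) can be reused by every later
        # task submitted to the same slot.
        return self.executors[slot].submit(fn, *args, **kwargs)

    def shutdown(self, wait=True):
        for ex in self.executors:
            ex.shutdown(wait=wait)
```

The obvious cost of this approach is one executor (and its spawn overhead) per slot, and no load balancing at all, which is exactly the trade-off I wanted: the data never moves.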