Support for unit testing #143
Hi. I am also interested in this. I'm wondering if a simpler solution is possible. Consider segregating MPI tests into their own test crate and using a custom test harness (check out this article) which calls `mpi::initialize()` once for the whole suite.

Following the approach of the second option (following the linked article) would look something like:

```rust
use mpi::environment::Universe;

struct Test<'a> {
    name: String,
    test: Box<dyn Fn(&'a Universe)>,
}

fn main() {
    let universe = mpi::initialize().unwrap();
    // Assemble tests in a `Vec<Test<'a>>` somehow ...
    let tests: Vec<Test> = Vec::new();
    for test in tests.into_iter() {
        (test.test)(&universe)
    }
}
```

Finally, the tests must be executed sequentially and not on individual threads, as is the default behavior of `cargo test`.
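For the custom `main` above to run in place of the built-in libtest runner, the test target has to opt out of the default harness. A minimal sketch of the Cargo configuration (the target name `mpi_tests` is an assumption for illustration):

```toml
# Cargo.toml of the test crate: disable the default libtest harness so
# the sequential `main` above drives the MPI tests instead.
[[test]]
name = "mpi_tests"   # corresponds to tests/mpi_tests.rs
harness = false
```

With `harness = false`, Cargo builds and runs the file's own `main`, so test threading and ordering are entirely under the harness author's control.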
How do we get a parallel job in your model? Using `mpiexec`? I'm familiar with test harnesses and have tinkered with them to some extent. I'm a big fan of nextest, which is way faster than the standard `cargo test` runner.
Not sure. It's desirable to have an interface like the standard `cargo test`. On the other hand, it may be possible for the test harness to call `mpiexec` itself.
The interface would be […]
I would like to enable MPI unit testing and doctesting, with IDE integration and command-line tooling. I found the rusty-fork library, which I think is a viable strategy. Unfortunately, it is unmaintained, and thus probably better to fork into an MPI-equipped testing crate. Also, nextest is fast, produces nice output, and supports tests that use a different number of slots, thus parallelizing while preventing/limiting oversubscription. In all cases, we need a macro that integrates with `#[test]`.

**How to spawn**

- `mpiexec`, which we need to identify (heuristics/configuration/environment variable) and whose special options we need to handle. It cannot give us an intercommunicator for collating results. This is the best-supported approach as far as MPI implementations are concerned.
- `MPI_Comm_spawn` to launch the parallel job. This allows the caller to interact with the parallel job via an intercommunicator. Unfortunately, spawning from a singleton init is not well supported by MPI implementations. For example, MPICH needs the environment to be carefully crafted because it just runs the first `mpiexec` found in `PATH`. Implementations could make this a reliable approach, with better performance than using external job launchers and without sensitivity to the environment.
- Spawn a child process (as `rusty-fork` does, since `fork()` isn't portable to Windows) before `MPI_Init`. Use `MPI_Open_port` and send the port name to the child (e.g., over `stdin`); then use `MPI_Comm_accept` on the parent and `MPI_Comm_connect` on the child. This creates an intercommunicator similar to `MPI_Comm_spawn`'s. Support for this feature requires more environment shenanigans, such as running `ompi-server` as a daemon and crafting environment variables to interact with the server. I think this also doesn't solve our problems at present, but it could if MPI implementations avoided the need for these external channels.
**Collective assertions**

Standard `assert_eq!(left, right)` and friends can leave MPI processes hanging, and don't produce nice output in case they diverge. I think a collective `coll_assert_eq!(comm, left, right)` that collates output is desirable. This could be implemented using an intercommunicator `MPI_Gather` if we used the `MPI_Comm_spawn` model above, but this is likely not feasible with current implementations.

Cc: @jtronge @hppritcha