
Howard Pritchard edited this page May 6, 2022 · 7 revisions

MTT Overview

Note: this wiki has references to the legacy Perl MTT, but it is still relevant, in parts, to the Python-based MTT client.

MTT is mainly an engine that drives several phases of operation. Each phase performs a distinct set of operations and is designed to be logically independent of the other phases. Each phase has Perl plugins to perform the majority of the work; the MTT client mainly provides the infrastructure, frameworks, and backing data store for the plugins to interact with each other.

The phases are:

  1. MPI Get: In this phase, an MPI implementation is obtained. It can be downloaded from the internet (http/s, ftp, Git clone, etc.), or copied (local tarball or directory tree).
  2. MPI Install: The MPI implementation is installed (source MPI distributions must be built before being installed).
  3. Test Get: Similar to MPI Get, this phase obtains test suites with which to test each MPI. Test suites are likely to be in source form so that they can be compiled against each MPI installation.
  4. Test Build: The test suites obtained in the Test Get phase are compiled against all successfully installed MPI implementations.
  5. Test Run: The test suites that were successfully built in the Test Build phase are run.
  6. Reporter: This is currently a "pseudo" phase, because it is actually executed during and after most of the above phases (as appropriate). Results are reported, typically to stdout, files, or to a central database.

Phases are effectively templated to allow multiple executions of each phase based on parameterization. For example, you can specify a single MPI implementation, but have MTT compile it with both the GNU and Intel compilers. MTT will automatically track that there is one MPI source, but two installations of it. Every test suite that is specified will therefore be compiled and run against both MPI installations, and their results filed accordingly. Hence, MTT gives a multiplicative effect. A simplistic view:

  • M MPI implementations are specified
  • I installations of each MPI implementation are specified
  • A total of (M * I) installations are created (assuming all are successful)
  • T test suites are specified, each of which is compiled against the (M * I) MPI installations
  • R different run parameters are specified for each test suite
  • A total of (T * R * M * I) tests are run.

Hence, you must be careful not to specify too much work to MTT -- it will happily do all of it, but it may take a long, long time!
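The arithmetic above can be sketched directly. The values below are illustrative examples, not MTT defaults:

```python
# Illustrative count of how MTT's phase parameterization multiplies work.
# M, I, T, and R are hypothetical values, not actual MTT parameters.
M = 2  # MPI implementations specified
I = 3  # installations per MPI implementation (e.g., one per compiler)
T = 4  # test suites specified
R = 5  # run parameter sets per test suite

installations = M * I        # total MPI installations created
test_runs = T * R * M * I    # total tests run
print(installations)  # 6
print(test_runs)      # 120
```

Even these modest numbers produce 120 test runs; real configurations with arrays of np values (described later) multiply further.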

Note: MTT takes care of all PATH and LD_LIBRARY_PATH issues when building and installing both MPI implementations and test suites. There is no need for the user to set up anything special in their shell startup files (at least for Open MPI -- when we move to other MPI implementations, we may need to enforce some special mojo in shell startup files to set paths properly in non-resource-manager-controlled environments [i.e., where rsh/ssh are used]).

The Big Picture

The following graphic is a decent representation of the relationships of the phases to each other, and the general sequence of phases. It shows two example MPI implementations (Open MPI and MPICH), but any MPI implementation could be used (even multiple versions of the same MPI implementation):

Configuring MTT

The MTT client is an executable named "mtt". It is configured by an INI-style file that is specified on the command line:

shell$ mtt --file my_config_file.ini ...

The most current sample of the INI file is in the Git repository (https://github.com/open-mpi/mtt/blob/master/samples/perl/ompi-core-template.ini). The file is split up into sections for each phase plus a global parameters section. In classic INI-file style, sections are denoted with strings inside brackets and parameters are specified as "key = value" pairs. For example:

# Lines beginning with "#" are comments
[This is a section]
git_url = https://github.com/open-mpi/ompi.git

[This is another section]
git_url = https://github.com/open-mpi/ompi.git
# Lines can be continued with shell-like "\" notation:
value = this is a really \
      long line
# Very long values can use shell-like << notation:
another_value = <<EOF
This value is multiple lines.
Woo hoo!
EOF

Note that the INI file sections and parameters are unordered. This means that changing the sequence of sections or parameters within sections has no effect.

General INI parameter notes

  • The global section is named "MTT" (and is therefore denoted with "[MTT]" in the INI file). It currently accepts a small number of parameters; see the comments in the ompi-core-template.ini file for descriptions of what these fields are and how they are used.
  • Phase sections are identified by their phase name followed by a colon followed by an arbitrary identifier string (leading and trailing white space is ignored). For example: "[MPI Get: OMPI nightly trunk]" is an MPI Get section named "OMPI nightly trunk" (all valid phase names are listed earlier on this page).
    • All sections not matching this pattern are ignored
    • INI sections can be filtered using the --[no]-section options (see wiki:SectionFilter).
    • Or to hardcode the section filter into the ini file, simply insert any arbitrary string before the phase name in the section (e.g., "[SKIP MPI Get: OMPI nightly trunk]"). This will cause MTT to skip this section.
  • Any INI parameter can be overridden at the command line by simply supplying a field=value assignment. This avoids the need for creating numerous INI files that are virtually identical (see INIOverrides).
  • Each phase takes a different set of parameters. See the ompi-core-template.ini file for what parameters are accepted at each phase. Most phases accept a "module" parameter (e.g., "module = name"). The value of the module parameter needs to be a Perl module name in the MTT tree for that particular phase. Specifically, the module parameter denotes which Perl plugin is used to execute the heavy lifting logic for that phase.
  • Each module also takes its own parameters. Parameters for modules are prefixed with the module's name so that it is obvious that they belong to that module.
  • Hence, a phase is comprised of phase-specific parameters, the designation of a module, and then module-specific parameters.
  • Special sections denoted by the name "MPI Details" (i.e., "[MPI Details: some name]") are used to tell MTT how to run executables under that MPI. More details on this below.
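The section-naming convention above can be illustrated with a short sketch. This is a hypothetical re-implementation of the behavior described in the bullets, not MTT's actual Perl parsing code:

```python
import re

# Hypothetical model of the section-header convention described above:
# "<phase name>: <identifier>", with surrounding whitespace ignored.
# Headers that do not start with a known phase name (e.g., "SKIP MPI Get")
# are ignored/skipped.
PHASES = ("MPI Get", "MPI Install", "Test Get", "Test Build",
          "Test Run", "Reporter", "MPI Details")

def parse_section(header):
    """Return (phase, identifier), or None if the section would be ignored."""
    m = re.match(r"\s*(%s)\s*:\s*(.*?)\s*$" % "|".join(PHASES), header)
    return (m.group(1), m.group(2)) if m else None

print(parse_section("MPI Get: OMPI nightly trunk"))       # ('MPI Get', 'OMPI nightly trunk')
print(parse_section("SKIP MPI Get: OMPI nightly trunk"))  # None -- skipped
```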

Phase-specific parameter notes

  • MPI Get
    • Specify as many MPI implementations as you want. Be aware that each of these will automatically be used to compile and run your test suites (which takes time), so don't include an arbitrarily large number.
  • MPI Install
    • Since each MPI Get section will potentially download a different MPI implementation (and therefore require a different installation process), you have to tell MPI Install sections which MPI Get section(s) to build.
    • For example, if you specify multiple MPI Get sections to download different versions of Open MPI, you have to tell the MPI Install section(s) which MPI Get sections to install.
    • Here's an INI file example that shows downloading two different versions of Open MPI and compiling both of them with three different compilers (GNU, Intel, PGI):
[MPI Get: OMPI nightly trunk]
# This "mpi_details" is a forward reference -- see below for details on its meaning
mpi_details = Open MPI
module = OMPI_Snapshot
ompi_snapshot_url = http://www.open-mpi.org/nightly/trunk

[MPI Get: OMPI v1.2 snapshot tarballs]
mpi_details = Open MPI
module = OMPI_Snapshot
ompi_snapshot_url = http://www.open-mpi.org/nightly/v1.2

[MPI Install: GNU compilers]
# A comma-delimited list of MPI Get sections to install
mpi_get = OMPI nightly trunk,OMPI v1.2 snapshot tarballs
make_all_arguments = -j 4
compiler_name = gnu
# Note the "&" notation below; we'll explain that in the "Funclets" section, below
compiler_version = &shell("gcc --version | head -n 1 | awk '{ print \$3 }'")
configure_arguments = --enable-picky --enable-debug
module = OMPI

[MPI Install: Intel compilers]
mpi_get = OMPI nightly trunk,OMPI v1.2 snapshot tarballs
make_all_arguments = -j 4
compiler_name = intel
# Funclets are described below
compiler_version = &join("&shell("icc --version | head -n 1 | awk '{ print \$3 }'")", " ", "&shell("icc --version | head -n 1 | awk '{ print \$4 }'")")
configure_arguments = CC=icc CXX=icpc F77=ifort FC=ifort CFLAGS=-g --enable-picky --enable-debug
module = OMPI

[MPI Install: PGI compilers]
mpi_get = OMPI nightly trunk,OMPI v1.2 snapshot tarballs
make_all_arguments = -j 4
compiler_name = pgi
# Funclets are described below
compiler_version = &shell("pgcc -V | head -n 2 | tail -n 1 | awk '{ print \$2 }'")
configure_arguments = CC=pgcc CXX=pgCC F77=pgf77 FC=pgf90 CFLAGS=-g --enable-picky --enable-debug
module = OMPI
  • Note that this INI file will produce 6 MPI installations: it will install the OMPI trunk 3 times (one each for the GNU, Intel, and PGI compilers) and it will install the OMPI v1.2 snapshots 3 times (ditto). This is the multiplicative effect of MTT.
  • The MPI Install phase accepts several notable parameters:
    • mpi_get: A comma-delimited list of MPI Get section names to install
    • make_all_arguments: Arguments passed to "make all" when building the MPI. Open MPI supports parallel builds (e.g., "-j 4"), which speed up MTT execution times significantly.
    • compiler_name: This field is simply for searchability of results in the database. Currently accepted names are: gnu, pgi, intel, ibm, kai, absoft, pathscale, sun.
    • compiler_version: Since the version of the compiler can have a lot to do with the results, this field is included in the result data. We use "funclets" to obtain the compiler version; see the Funclets section below for more of a description.
    • configure_arguments: Arguments given to the MPI's configure script.
  • There are other fields as well; see the ompi-core-template.ini file for examples.
  • NOTE: Several of these fields break the abstraction in that they assume that all MPIs will be built from source. As such, some of these parameters may move into the OMPI plugin someday (e.g., "make_all_arguments" and friends).
  • Test Get
    • This phase is pretty much the same as the MPI Get phase, except that it is for getting MPI test suites, not MPI implementations.
  • Test Build
    • Every MPI that successfully passes the MPI Get and MPI Install phases is paired with every test suite that successfully passes the Test Get phase (another multiplicative effect).
    • Hence, every test suite is built against every MPI install.
    • A small number of modules are available to build test suites; some are specific to the test suite (e.g., "Trivial" and "Intel_OMPI_Tests"), while others are more generic ("Shell").
  • Test Run
    • The Test Run section only denotes which tests should be run. It does not actually run them (this phase is somewhat poorly named and will likely be renamed in the future). The MTT engine takes the list of tests to run and pushes them through its testing engine to actually execute them and generate results.
    • Similar to the relationship between MPI Install and MPI Get sections, Test Run sections need to refer back to which Test Build sections they are linked with.
    • There is currently only one Test Run module named "Simple", but it is quite flexible. Here is an example Test Run phase to specify the Intel tests to run:
[Test run: intel]
# Assumedly there is a [Test Build: intel] section elsewhere in this INI file
test_build = intel
# Specify the conditions for tests that pass.  In this example, if the exit status of the
# executable is 0 or 77, the test is ruled to pass.
pass = &or(&eq(&test_exit_status(), 0), &eq(&test_exit_status(), 77))
# Specify how long this test has to run.  In this case, it is 30 seconds or
# 10*number_of_processes, whichever is greater.
timeout = &max(30, &multiply(10, &test_np()))
# &env_max_procs() returns how many processes the current environment can run
# (e.g., under a resource manager, or the number of slots specified in a hostfile).
# But the intel tests have some hard limits at 64 processes.  So this sets the
# number of processes to be the minimum between 64 or the number allowed by
# the environment.
np = &min(64, &env_max_procs())

# Use the "Simple" module
module = Simple
# Find any executables that were compiled by the corresponding [Test Build: intel] section
simple_tests = &find_executables("src")
  • MTT users will likely want to tune the timeout value for their environment based on CPU speed, interconnect speed, number of processes, etc.

Funclets

Funclets are mini-functions invoked from the INI file. Their syntax is quite Perl-like. The idea is to give significant flexibility within the INI file to make it both templatable and re-usable in a variety of scenarios. In general, their format is:

  &funclet_name(comma-delimited parameter list)

Funclets return values which can be used as parameters to other funclets. In the examples of INI file sections shown on this page, you can see a lot of nesting of funclets to obtain various values. A good example to discuss is obtaining the Intel compiler's version number:

compiler_version = &join("&shell("icc --version | head -n 1 | awk '{ print \$3 }'")", " ", "&shell("icc --version | head -n 1 | awk '{ print \$4 }'")")

Notice the two calls to &shell(). This funclet is similar to the system() C and Perl functions; it forks and execs its argument in a shell, so any shell syntax is valid. The stdout of the result is returned as the value of the funclet. To see how the above example works, first look at the output of "icc --version":

shell$ icc --version
icc (ICC) 9.0  20051201
Copyright (C) 1985-2005 Intel Corporation.  All rights reserved.

We want the "9.0" and "20051201" values from the first line. In this case, we use some shell magic to get them, via head and awk. Unfortunately, there is currently a hard rule that quotes cannot be double nested in funclets, so we have to obtain these two numbers via two calls to &shell(). However, we also want the final output to have a space between the two values, so we surround the two calls to &shell() with a call to &join(), which simply joins its N arguments (&join() can take any number of arguments) together in a final string. Hence, in this example, &join() joins its three arguments:

  • 9.0
  • a space
  • 20051201

The result is the string "9.0 20051201".
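A minimal sketch of the &join() behavior just described. This is a Python stand-in, not MTT's Perl implementation, and the two &shell() calls are replaced by hard-coded strings so the example does not depend on icc being installed:

```python
# Hypothetical stand-in for MTT's &join() funclet: concatenate its N
# string arguments into one final string.
def join(*args):
    return "".join(args)

# Stand-ins for the two &shell("icc --version | ...") calls, assuming the
# icc output shown above.
major = "9.0"       # would come from &shell(... awk '{ print $3 }')
build = "20051201"  # would come from &shell(... awk '{ print $4 }')

print(join(major, " ", build))  # 9.0 20051201
```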

Note that the real power of funclets is not simply in obtaining values from the shell. Funclets can be used to expand a parameter into an array of values. For example, consider this np value from a Test Run section:

np = &pow(2, 0, &log(2, &env_max_procs()))

The funclet &env_max_procs() will return the maximum number of processes allowed in this environment (e.g., SLURM, PBS, or a hostfile/hostlist). The &pow() funclet takes 3 parameters; the &log() funclet takes 2 parameters:

&pow(base, min_exponent, max_exponent)
&log(base, value)

&log() returns a scalar value; &pow() returns an array of values from base^min_exponent to base^max_exponent. So in the above "np" example, if MTT was running in a SLURM job of 60 processes, np would be an array of the following values: 1, 2, 4, 8, 16, 32. Note that since we were only running with 60 processes (not 64), the sequence stopped at 32.

Assigning an array of values to np means that the MTT engine will run every test for each value of np. This can be a massive multiplicative effect!
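A hedged Python sketch of the &pow()/&log() semantics described above. The behavior is inferred from the prose; in particular, &log() is assumed to truncate to an integer exponent, which is what makes the sequence stop at 32 for a 60-process job:

```python
import math

# Hypothetical equivalents of the &log() and &pow() funclets described above.

def log(base, value):
    # Assumed to return the integer part of log_base(value).
    return int(math.log(value, base))

def pow_range(base, min_exponent, max_exponent):
    # &pow(base, min_exponent, max_exponent) returns an array of powers.
    return [base ** e for e in range(min_exponent, max_exponent + 1)]

# np = &pow(2, 0, &log(2, &env_max_procs())) with a 60-process SLURM job:
print(pow_range(2, 0, log(2, 60)))  # [1, 2, 4, 8, 16, 32]
```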

There is currently no list or documentation of all the funclets that are available. The ompi-core-template.ini file provides many good examples. For the truly adventurous, all the funclets are implemented in a single Perl file in the MTT source; see source:branches/ompi-core-testers/lib/MTT/Values/Functions.pm.

Variable Substitution

The argument to each parameter in the INI file is evaluated before its value is used (for example, funclets are executed). Variable substitution is also allowed, meaning that one parameter can contain the value(s) of another parameter. This is best shown through example:

exec = mpirun -np &test_np() --mca btl self,@btls@ &test_executable() &test_argv()
btls = &enumerate("tcp", "openib")

Ignore the funclets in the exec parameter for the moment and note the "@btls@" token. This tells MTT to substitute the value of the "btls" parameter into exec. btls, in turn, has a funclet which returns an array of two strings ("tcp" and "openib"). Hence, the value of the exec parameter is going to be two strings, one which contains "--mca btl self,tcp" and one that contains "--mca btl self,openib".
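A hedged Python sketch of this @param@ substitution, with the semantics inferred from the prose (not MTT's Perl implementation; the funclets in the real exec line are replaced with fixed values here):

```python
# Hypothetical model of MTT's @param@ variable substitution: when the
# referenced parameter holds an array of values, the containing parameter
# expands to one string per array element.
def substitute(template, name, values):
    return [template.replace("@%s@" % name, v) for v in values]

exec_template = "mpirun -np 4 --mca btl self,@btls@ a.out"
btls = ["tcp", "openib"]  # what &enumerate("tcp", "openib") would return

for cmd in substitute(exec_template, "btls", btls):
    print(cmd)
# mpirun -np 4 --mca btl self,tcp a.out
# mpirun -np 4 --mca btl self,openib a.out
```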

The "MPI Details" section(s)

The MPI Details section tells MTT how to run an executable with a particular MPI implementation. Each MPI Get section needs to specify an MPI Details section that describes how to run executables for that MPI. In this way, by the time MTT gets all the way down to running tests, it knows how to invoke arbitrary executables with each MPI implementation. For example, consider this MPI Details section for Open MPI:

[MPI Details: Open MPI]
exec = mpirun @hosts@ -np &test_np() --mca btl self,@btls@ --prefix &test_prefix() &test_executable() &test_argv()

btls = &enumerate("tcp", "openib")
hosts = &if(&have_hostfile(), "&join("--hostfile ", "&hostfile()")", \
            "&if(&have_hostlist(), "&join("--host ", "&hostlist()")", "")")

All the funclets that begin with "&test" return essentially what you would expect:

  • &test_np(): the value from the np parameter of the particular Test Run section that is being executed
  • &test_prefix(): the installation prefix for the MPI
  • &test_executable(): the name of the executable under test
  • &test_argv(): any argv associated with the test

Note that as described above, np may be an array of values. If it is, MTT will execute the test for each value in the array.

Note that we have two variable substitutions in the exec parameter -- @btls@ and @hosts@. The @btls@ example was explained above, but note that in this example, each BTL will be used for each value of np. So if np has the array of values {1, 2, 4, 8, 16, 32}, this will actually trigger 12 runs for each test because each np value is paired with "tcp" and then "openib". This can be another massive multiplicative effect!

The @hosts@ substitution is used to apply the hostfile and hostlist values from the MTT globals section in an Open MPI-specific way (the &if() funclet takes 3 parameters: &if(expression, value_if_true, value_if_false)). That is, if a hostfile is specified, @hosts@ will expand to "--hostfile" followed by the hostfile name. If a hostlist is specified, @hosts@ will expand to "--host" followed by the hostlist. All the quoting is necessary because of the Perl-like evaluation rules -- each return value must be treated as a string (not a number) so that it can be combined with other values to produce a final string (as opposed to a number).
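The 12-runs-per-test arithmetic above is just a cross-product of the two expanded parameters. An illustrative Python sketch, not MTT code:

```python
import itertools

# Each np value is paired with each BTL value, so the number of runs per
# test is len(np_values) * len(btls).
np_values = [1, 2, 4, 8, 16, 32]  # from &pow(2, 0, &log(2, &env_max_procs()))
btls = ["tcp", "openib"]          # from &enumerate("tcp", "openib")

runs = list(itertools.product(np_values, btls))
print(len(runs))  # 12
```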

A word of warning

The various multiplicative effects described above are all intentional, and were implemented in MTT to provide a high degree of flexibility.

However, MTT users need to be cautious not to create an INI file that will take days (or weeks!) to complete. Use the "--print-time" option to the MTT client to see how long each phase is taking to help tune your INI file.