
Schwarz Library


Performance results

  1. Paper in IJHPCA; Alternative arXiv version
  2. Two stage update

Required components

The required components include:

  1. Ginkgo: The Ginkgo library is required. It needs to be installed, and preferably the installation path should be provided through the Ginkgo_DIR environment variable, as sketched below.
  2. MPI: Because the solver is distributed over multiple nodes via domain decomposition, an MPI implementation is necessary.
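For example, assuming a hypothetical installation prefix (adjust the path to your own Ginkgo install), the variable could be set before configuring:

export Ginkgo_DIR=/path/to/ginkgo/install    # hypothetical prefix, not a fixed location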

Quick Install

Building Schwarz-Lib

To build Schwarz-Lib, you can use the standard CMake procedure.

mkdir build; cd build
cmake -G "Unix Makefiles" .. && make

By default, SCHWARZ_BUILD_BENCHMARKING is enabled. This allows you to quickly run an example with timings if needed; an example configuration is shown below. For a detailed list of the available options, see the Benchmarking page.
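As an illustration (using standard CMake -D syntax; the values are only examples), benchmarking could be switched off and the Ginkgo path passed on the command line instead of through the environment:

mkdir build; cd build
cmake -G "Unix Makefiles" -DSCHWARZ_BUILD_BENCHMARKING=OFF -DGinkgo_DIR=/path/to/ginkgo/install .. && make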

For more CMake options, please refer to the Installation page.

Currently implemented features

  1. Executor paradigm:
     • GPU.
     • OpenMP.
     • Single rank per node, with threading within the node.
  2. Factorization paradigm:
     • CHOLMOD.
     • UMFPACK.
  3. Solving paradigm:
     • Direct:
       • Ginkgo.
       • CHOLMOD.
       • UMFPACK.
     • Iterative:
       • Ginkgo.
       • deal.II.
  4. Partitioning paradigm:
     • METIS.
     • Regular, 1D.
     • Regular, 2D.
     • Zoltan.
  5. Convergence check:
     • Centralized, tree-based convergence (Yamazaki 2019).
     • Decentralized, leader-election based (Bahi 2005).
  6. Communication paradigm:
     • One-sided.
     • Two-sided.
     • Event based.
  7. Communication strategies (see the sketch after this list):
     • Remote communication strategies:
       • MPI_Put, gathered.
       • MPI_Put, one by one.
       • MPI_Get, gathered.
       • MPI_Get, one by one.
     • Lock strategies: MPI_Win_lock / MPI_Win_lock_all.
       • Lock all and unlock all.
       • Lock local and unlock local.
     • Flush strategies: MPI_Win_flush / MPI_Win_flush_local.
       • Flush all.
       • Flush local.
  8. Schwarz problem type:
     • RAS.
     • O-RAS.

Any of the implemented features can be permuted and tested.
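The one-sided strategies listed under item 7 combine standard MPI-3 RMA calls. Below is a minimal, generic C++ sketch, not code from Schwarz-Lib, of the "MPI_Put, gathered" remote strategy paired with the "lock all" and "flush" variants: every rank exposes a receive buffer through an MPI window and pushes its gathered boundary values to its right neighbor in a single put. The buffer size and the neighbor choice are illustrative assumptions.

// Generic MPI-3 RMA sketch (not Schwarz-Lib code): each rank exposes a
// receive buffer through a window, opens a shared lock on all targets,
// and pushes its gathered boundary data to the right neighbor with a
// single MPI_Put, completing the transfer with MPI_Win_flush.
#include <mpi.h>
#include <vector>

int main(int argc, char** argv)
{
    MPI_Init(&argc, &argv);
    int rank = 0, size = 1;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    const int count = 8;                      // number of boundary values (example)
    std::vector<double> recv_buf(count, 0.0); // window memory: data arriving from the left neighbor
    std::vector<double> send_buf(count, static_cast<double>(rank)); // gathered local boundary data

    MPI_Win win;
    MPI_Win_create(recv_buf.data(), count * sizeof(double), sizeof(double),
                   MPI_INFO_NULL, MPI_COMM_WORLD, &win);

    const int right = (rank + 1) % size;      // target rank of the put (example neighbor)

    // "Lock all" variant: one shared lock on every target for the whole epoch.
    MPI_Win_lock_all(0, win);

    // "Gathered" variant: one MPI_Put for the whole contiguous boundary buffer.
    MPI_Put(send_buf.data(), count, MPI_DOUBLE, right, 0, count, MPI_DOUBLE, win);

    // "Flush" variant: complete the operation at the target without closing the epoch.
    MPI_Win_flush(right, win);

    MPI_Win_unlock_all(win);

    MPI_Barrier(MPI_COMM_WORLD);              // make sure all puts have landed before reading recv_buf
    MPI_Win_free(&win);
    MPI_Finalize();
    return 0;
}

The "one by one" variant would instead issue one MPI_Put per boundary entry, and the "lock local" variant would lock only the target rank with MPI_Win_lock before each transfer.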

Known Issues

  1. On Summit, Spectrum MPI appears to have a bug when MPI_Put is used with GPU buffers; MPI_Get works as expected. This has also been confirmed with an external micro-benchmarking suite, the OSU Micro-Benchmarks.

For installing and building, please check the Installation page.

Credits: This code (written in C++, with additions and improvements) was inspired by code from Ichitaro Yamazaki, ICL, UTK.