mpitrampoline

A forwarding MPI implementation that can use any other MPI implementation via an MPI ABI

https://github.com/eschnett/mpitrampoline

Science Score: 77.0%

This score indicates how likely this project is to be science-related based on various indicators:

  • CITATION.cff file
    Found CITATION.cff file
  • codemeta.json file
    Found codemeta.json file
  • .zenodo.json file
    Found .zenodo.json file
  • DOI references
    Found 3 DOI reference(s) in README
  • Academic publication links
    Links to: zenodo.org
  • Committers with academic emails
    1 of 5 committers (20.0%) from academic institutions
  • Institutional organization owner
  • JOSS paper metadata
  • Scientific vocabulary similarity
    Low similarity (10.9%) to scientific vocabulary
Last synced: 6 months ago

Repository

A forwarding MPI implementation that can use any other MPI implementation via an MPI ABI

Basic Info
  • Host: GitHub
  • Owner: eschnett
  • License: MIT
  • Language: Python
  • Default Branch: main
  • Size: 17.7 MB
Statistics
  • Stars: 48
  • Watchers: 8
  • Forks: 4
  • Open Issues: 6
  • Releases: 45
Created over 4 years ago · Last pushed over 1 year ago
Metadata Files
Readme License Citation Zenodo

README.md

MPItrampoline

  • MPI wrapper library: GitHub CI
  • MPI trampoline library: GitHub CI
  • MPI integration tests: GitHub CI
  • DOI

MPI has been the de facto standard for inter-node communication on HPC systems for the past 25 years. While highly successful, MPI is a standard for source code (it defines an API), not a standard for binary compatibility (it does not define an ABI). This means that applications running on HPC systems need to be compiled anew on every system, which is tedious because the software available on each HPC system differs slightly.

This project attempts to remedy this. It defines an ABI for MPI, and provides an MPI implementation based on this ABI. That is, MPItrampoline does not implement any MPI functions itself; it only forwards them to a "real" implementation via this ABI. The advantage is that one can produce "portable" applications that can use any given MPI implementation. For example, this will make it possible to build external packages for Julia via Yggdrasil that run efficiently on almost any HPC system.

A small and simple MPIwrapper library is used to provide this ABI for any given MPI installation. MPIwrapper needs to be compiled for each MPI installation that is to be used with MPItrampoline, but this is quick and easy.

Successfully Tested

  • Debian 11.0 via Docker (MPICH; arm32v5, arm32v7, arm64v8, mips64le, ppc64le, riscv64; C/C++ only)
  • Debian 11.0 via Docker (MPICH; i386, x86-64)
  • macOS laptop (MPICH, OpenMPI; x86-64)
  • macOS via GitHub Actions (OpenMPI; x86-64)
  • Ubuntu 20.04 via Docker (MPICH; x86-64)
  • Ubuntu 20.04 via GitHub Actions (MPICH, OpenMPI; x86-64)
  • Blue Waters, HPC system at the NCSA (Cray MPICH; x86-64)
  • Graham, HPC system at Compute Canada (Intel MPI; x86-64)
  • Marconi A3, HPC system at Cineca (Intel MPI; x86-64)
  • Niagara, HPC system at Compute Canada (OpenMPI; x86-64)
  • Summit, HPC system at ORNL (Spectrum MPI; IBM POWER 9)
  • Symmetry, in-house HPC system at the Perimeter Institute (MPICH, OpenMPI; x86-64)

Workflow

Preparing an HPC system

Install MPIwrapper, wrapping the MPI installation you want to use there. You can install MPIwrapper multiple times if you want to wrap more than one MPI implementation.

This is possibly as simple as

```sh
cmake -S . -B build -DMPIEXEC_EXECUTABLE=mpiexec -DCMAKE_BUILD_TYPE=RelWithDebInfo -DCMAKE_INSTALL_PREFIX=$HOME/mpiwrapper
cmake --build build
cmake --install build
```

but nothing is ever simple on an HPC system. It might be necessary to load certain modules, or to specify more CMake MPI configuration options.

The MPIwrapper libraries remain on the HPC system; they are installed independently of any application.

Building an application

Build your application as usual, using MPItrampoline as the MPI library.

Running an application

At startup time, MPItrampoline needs to be told which MPIwrapper library to use. This is done via the environment variable MPITRAMPOLINE_LIB. You also need to point MPItrampoline's mpiexec to the corresponding wrapper created by MPIwrapper, using the environment variable MPITRAMPOLINE_MPIEXEC.

For example:

```sh
env MPITRAMPOLINE_MPIEXEC=$HOME/mpiwrapper/bin/mpiwrapper-mpiexec \
    MPITRAMPOLINE_LIB=$HOME/mpiwrapper/lib/libmpiwrapper.so \
    mpiexec -n 4 ./your-application
```

The mpiexec you run here needs to be the one provided by MPItrampoline.

Current state

MPItrampoline uses the C preprocessor to create wrapper functions for each MPI function. This is how MPI_Send is wrapped:

```c
FUNCTION(int, Send,
         (const void *buf, int count, MT(Datatype) datatype, int dest,
          int tag, MT(Comm) comm),
         (buf, count, (MP(Datatype))datatype, dest, tag, (MP(Comm))comm))
```

Unfortunately, MPItrampoline does not yet wrap the Fortran API. Your help is welcome.

Certain MPI types, constants, and functions are difficult to wrap. Theoretically, there could be MPI libraries where it is not possible to implement the current MPI ABI. If you encounter this, please let me know -- maybe there is a work-around.

Owner

  • Name: Erik Schnetter
  • Login: eschnett
  • Kind: user
  • Location: Waterloo, Ontario, Canada
  • Company: Perimeter Institute for Theoretical Physics

Citation (CITATION.cff)

cff-version: 1.1.0
message: "If you use this software, please cite it as below."
authors:
  - family-names: Schnetter
    given-names: Erik
    orcid: https://orcid.org/0000-0002-4518-9017
title: MPItrampoline
version: v5.5.0
doi: 10.5281/zenodo.6174408
date-released: 2024-09-25

GitHub Events

Total
  • Watch event: 4
  • Issue comment event: 2
Last Year
  • Watch event: 4
  • Issue comment event: 2

Committers

Last synced: almost 3 years ago

All Time
  • Total Commits: 328
  • Total Committers: 5
  • Avg Commits per committer: 65.6
  • Development Distribution Score (DDS): 0.024
Past Year
  • Commits: 30
  • Committers: 1
  • Avg Commits per committer: 30.0
  • Development Distribution Score (DDS): 0.0
Top Committers
Name Email Commits
Erik Schnetter s****r@g****m 320
ocaisa a****s@c****g 3
Alan O'Cais a****s@f****e 2
Harmen Stoppels h****s@g****m 2
Mosè Giordano m****e@g****g 1

Issues and Pull Requests

Last synced: 7 months ago

All Time
  • Total issues: 14
  • Total pull requests: 35
  • Average time to close issues: 10 days
  • Average time to close pull requests: 2 days
  • Total issue authors: 8
  • Total pull request authors: 4
  • Average comments per issue: 7.29
  • Average comments per pull request: 0.8
  • Merged pull requests: 32
  • Bot issues: 0
  • Bot pull requests: 0
Past Year
  • Issues: 1
  • Pull requests: 4
  • Average time to close issues: 29 days
  • Average time to close pull requests: 7 days
  • Issue authors: 1
  • Pull request authors: 2
  • Average comments per issue: 17.0
  • Average comments per pull request: 0.75
  • Merged pull requests: 3
  • Bot issues: 0
  • Bot pull requests: 0
Top Authors
Issue Authors
  • ocaisa (6)
  • PhilipVinc (1)
  • freemin7 (1)
  • blue42u (1)
  • nick-wilson (1)
  • simonbyrne (1)
  • spoutn1k (1)
  • mkre (1)
Pull Request Authors
  • eschnett (31)
  • ocaisa (4)
  • haampie (1)
  • giordano (1)

Packages

  • Total packages: 1
  • Total downloads: unknown
  • Total dependent packages: 0
  • Total dependent repositories: 0
  • Total versions: 12
  • Total maintainers: 1
spack.io: mpitrampoline

MPItrampoline: A forwarding MPI implementation that can use any other MPI implementation via an MPI ABI.

  • Versions: 12
  • Dependent Packages: 0
  • Dependent Repositories: 0
Rankings
Dependent repos count: 0.0%
Stargazers count: 23.5%
Average: 27.1%
Forks count: 27.5%
Dependent packages count: 57.3%
Maintainers (1)
Last synced: 7 months ago

Dependencies

.github/workflows/CI.yml actions
  • actions/checkout v3 composite
  • actions/checkout v2 composite