DisCoTec

DisCoTec: Distributed higher-dimensional HPC simulations with the sparse grid combination technique - Published in JOSS (2025)

https://github.com/sgpp/discotec

Science Score: 100.0%

This score indicates how likely this project is to be science-related based on various indicators:

  • CITATION.cff file
    Found CITATION.cff file
  • codemeta.json file
    Found codemeta.json file
  • .zenodo.json file
    Found .zenodo.json file
  • DOI references
    Found 6 DOI reference(s) in README and JOSS metadata
  • Academic publication links
    Links to: sciencedirect.com, springer.com, joss.theoj.org, zenodo.org
  • Committers with academic emails
    12 of 25 committers (48.0%) from academic institutions
  • Institutional organization owner
  • JOSS paper metadata
    Published in Journal of Open Source Software

Keywords

combination-technique higher-dimensional multi-scale simulation sparse-grids vlasov-solver
Last synced: 4 months ago

Repository

MPI-based code for distributed HPC simulations with the sparse grid combination technique. Docs: https://discotec.readthedocs.io/

Basic Info
  • Host: GitHub
  • Owner: SGpp
  • License: lgpl-3.0
  • Language: C++
  • Default Branch: main
  • Homepage: https://sparsegrids.org/
  • Size: 21.2 MB
Statistics
  • Stars: 8
  • Watchers: 4
  • Forks: 8
  • Open Issues: 12
  • Releases: 4
Topics
combination-technique higher-dimensional multi-scale simulation sparse-grids vlasov-solver
Created about 6 years ago · Last pushed 5 months ago
Metadata Files
Readme Contributing License Code of conduct Citation

README.md

DisCoTec: Distributed Combination Technique Framework

(Badges: build status, LGPL v3 license, Zenodo DOI, JOSS DOI, latest Spack version, Codacy grade)

What is DisCoTec?

This project contains DisCoTec, a code for running the distributed sparse grid combination technique with MPI parallelization. While it originated in the excellent SGpp project, its extensive parallelization makes it a very different code, and it has since become a project of its own.

DisCoTec is designed as a framework that can run multiple instances of a (black-box) grid-based PDE solver implementation. The most basic example we use is a mass-conserving FDM/FVM constant advection upwinding solver. An example of a separate, coupled solver is SeLaLib.

Sparse Grid Combination Technique with Time Stepping

The sparse grid combination technique (Griebel et al. 1992, Garcke 2013, Harding 2016) can be used to alleviate the curse of dimensionality encountered in high-dimensional simulations. Instead of using your PDE solver on a single structured full grid (where every dimension is finely resolved), you would use it on many different structured full grids (each of them differently resolved). We call these coarsely-resolved grids component grids. Taken together, all component grids form a sparse grid approximation, which can be explicitly obtained by a linear superposition of the individual grid functions, with the so-called combination coefficients.
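
For reference, here is a sketch of the classical combination formula in d dimensions, following the cited literature; the notation is the standard textbook one, not necessarily DisCoTec's internals:

```latex
% Sparse grid combination technique: the sparse grid approximation
% f_n^c of level n is a weighted sum of component-grid solutions
% f_l on anisotropic full grids with level multi-index l.
\[
  f_n^{c} \;=\; \sum_{q=0}^{d-1} (-1)^{q} \binom{d-1}{q}
  \sum_{|\boldsymbol{\ell}|_1 = n - q} f_{\boldsymbol{\ell}}
\]
% In 2D this reduces to coefficients +1 on the diagonal |l|_1 = n and
% -1 on the next-coarser diagonal |l|_1 = n - 1, matching the figure below.
```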

(Figure: schematic of a combination scheme in 2D)

In this two-dimensional combination scheme, the combination coefficients are +1 on the finer diagonal of component grids and -1 on the coarser one. Figure originally published in (Pollinger 2024).

Between time steps, the grids exchange data through a multi-scale approach, which is summarized as the "combination" step in DisCoTec. Assuming a certain smoothness in the solution, this allows for a good approximation of the finely-resolved function, while achieving drastic reductions in compute and memory requirements.
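
To quantify "drastic": the standard sparse grid estimates (e.g. Griebel et al. 1992; a textbook result, not a DisCoTec-specific measurement) compare degrees of freedom at mesh width 2^-n in d dimensions:

```latex
% Degrees of freedom, full grid vs. sparse grid, at level n in d dimensions:
\[
  \underbrace{\mathcal{O}\!\left(2^{nd}\right)}_{\text{full grid}}
  \qquad\text{vs.}\qquad
  \underbrace{\mathcal{O}\!\left(2^{n}\, n^{d-1}\right)}_{\text{sparse grid}}
\]
% Under sufficient (mixed) smoothness, the L2 accuracy degrades only
% mildly, from O(2^{-2n}) to O(2^{-2n} n^{d-1}).
```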

Parallelism in DisCoTec

The DisCoTec framework can work with existing MPI parallelized PDE solver codes operating on structured grids. In addition to the parallelism provided by the PDE solver, it adds the combination technique's parallelism. This is achieved through process groups (pgs): MPI_COMM_WORLD is subdivided into equal-sized process groups (and optionally, a manager rank).
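
Conceptually, this corresponds to splitting the world communicator by color. Below is a minimal standalone sketch, assuming an illustrative group size of four; this is plain MPI, not DisCoTec's actual API:

```cpp
// Minimal sketch of splitting MPI_COMM_WORLD into equal-sized process
// groups, as described above. This is plain MPI, not DisCoTec's actual
// API; nprocs_per_group is an illustrative assumption.
#include <mpi.h>
#include <cstdio>

int main(int argc, char** argv) {
  MPI_Init(&argc, &argv);

  int world_rank = 0, world_size = 0;
  MPI_Comm_rank(MPI_COMM_WORLD, &world_rank);
  MPI_Comm_size(MPI_COMM_WORLD, &world_size);

  const int nprocs_per_group = 4;  // illustrative process group size
  // Ranks sharing a color end up in the same process group; a manager
  // rank, if used, would simply get a color of its own.
  const int color = world_rank / nprocs_per_group;

  MPI_Comm group_comm = MPI_COMM_NULL;
  MPI_Comm_split(MPI_COMM_WORLD, color, world_rank, &group_comm);

  int group_rank = 0;
  MPI_Comm_rank(group_comm, &group_rank);
  std::printf("world rank %d of %d -> process group %d, local rank %d\n",
              world_rank, world_size, color, group_rank);

  MPI_Comm_free(&group_comm);
  MPI_Finalize();
  return 0;
}
```

Run with 16 ranks, this sketch yields four process groups of four ranks each; DisCoTec scales by growing either quantity.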

(Figure: schematic of MPI ranks in DisCoTec)

The figure illustrates the two ways of scaling up: one can increase either the size or the number of process groups. Figure originally published in (Pollinger 2024).

Combining the two ways of scaling up, DisCoTec has demonstrated its scalability on several machines, in experiments comprising up to 524,288 cores:

(Figures: timings for the advection solver step and for the combination step on HAWK at various parallelizations)

We see the timings (in seconds) for the advection solver step and the combination step, respectively. This weak scaling experiment used four OpenMP threads per rank and started with one pg of four processes in the upper left corner. The largest parallelization was 64 pgs of 2048 processes each. Figure originally published in (Pollinger 2024).

Find a more detailed discussion in the docs.

There are only a few codes that allow weak scaling up to this problem size: a size that uses most of the available main memory of the entire system.

When to Use DisCoTec?

If you are using a structured grid PDE solver and want to increase its accuracy while not spending additional compute or memory resources on it, DisCoTec may be a viable option. The codes most likely in this situation are the ones that solve high-dimensional problems and thus suffer the curse of dimensionality, such as the 4-6D discretizations occurring in plasma physics or computational quantum chemistry. But if you have a "normal" 2-3D problem and find yourself resource constrained, DisCoTec could be for you, too! Use its multiscale benefits without worrying about any multiscale yourself 😊

Why not try it with your own PDE solver?

What Numerical Advantage Can I Expect?

That depends on your problem! Figure 3.6 here shows a first-order accurate 2D PDE solver achieving approximately second-order accuracy with the Combination Technique, measured against the total number of degrees of freedom (DOF). (Figure omitted due to licensing; first published here.)

When Not to Use DisCoTec?

  1. If memory and/or time constraints are not your limiting factor, i.e., you can easily achieve the numerical accuracy you need with your current resources.
  2. If your PDE solver does not fit the discretization constraints imposed by DisCoTec:
    • a rectilinear (or mapped-to-rectilinear) domain
    • structured rectilinear grids as the main data structure (typically holding the unknown function), stored as a linearized array
    • numbers of values per dimension that can be chosen as powers of two, such that each discretization is a coarsened, nested version of the one with the next-higher power of two ("nested discretization"); see the sketch after this list
    • if distributed-memory parallelism is used, it must be MPI
    • currently, DisCoTec does not support Discontinuous Galerkin schemes, though they could become part of future versions (through Alpert multiwavelets). Let us know in case you are interested!
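
Here is a minimal sketch of the nesting property mentioned in the list above: a grid with 2^l cells per dimension contains every grid with a smaller power of two as a subset of its points. The level and point layout below are illustrative assumptions, not DisCoTec data structures.

```cpp
// Sketch of "nested discretization": the next-coarser grid keeps every
// second point of the finer grid, so all its points coincide with fine
// grid points. Level 3 (8 cells per dimension) is an illustrative choice.
#include <cstdio>

int main() {
  const int level = 3;              // 2^3 = 8 cells per dimension
  const int n_fine = 1 << level;    // fine grid: 8 cells, 9 points on [0, 1]
  const int n_coarse = n_fine / 2;  // next-coarser nested grid: 4 cells

  // Every coarse-grid point coincides exactly with a fine-grid point:
  for (int i = 0; i <= n_coarse; ++i) {
    const double x_coarse = static_cast<double>(i) / n_coarse;
    const int fine_index = 2 * i;   // nesting: keep every second point
    const double x_fine = static_cast<double>(fine_index) / n_fine;
    std::printf("coarse point %d at x=%.3f == fine point %d at x=%.3f\n",
                i, x_coarse, fine_index, x_fine);
  }
  return 0;
}
```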

Installing

DisCoTec can be installed via spack, which handles all dependencies. We recommend the spack dev-build workflow:

Clone both spack and DisCoTec to find or build the dependencies and then compile DisCoTec:

```bash
git clone git@github.com:spack/spack.git  # use https if ssh is not set up on github
./spack/bin/spack external find           # find already-installed packages
./spack/bin/spack compiler find           # find compilers present on system
./spack/bin/spack info discotec@main      # shows DisCoTec's variants
# shows DisCoTec's dependency tree and which parts are already found
./spack/bin/spack spec discotec@main

git clone git@github.com:SGpp/DisCoTec.git
cd DisCoTec
../spack/bin/spack dev-build -b install discotec@main
```

This will first build all dependencies, and then build DisCoTec inside the cloned folder. The executables are placed in the respective example and test folders.

To use DisCoTec in another CMake project, you can then add the line

```cmake
add_subdirectory(DisCoTec/src)
```

in your project's CMake files.

The docs describe the available CMake options and give further Spack customization hints.

Read The Full Documentation

Find the full DisCoTec documentation at https://discotec.readthedocs.io/.

The SeLaLib public source and SeLaLib documentation are available online.

For the current GENE documentation, you need to apply for access at genecode.org.

Owner

  • Name: SG++ development team
  • Login: SGpp
  • Kind: organization

JOSS Publication

DisCoTec: Distributed higher-dimensional HPC simulations with the sparse grid combination technique
Published
February 26, 2025
Volume 10, Issue 106, Page 7018
Authors
Theresa Pollinger ORCID
RIKEN Center for Computational Science (R-CCS), Kobe, Japan, University of Stuttgart, Scientific Computing, Stuttgart, Germany
Marcel Hurler
University of Stuttgart, Scientific Computing, Stuttgart, Germany
Alexander Van Craen ORCID
University of Stuttgart, Scientific Computing, Stuttgart, Germany
Michael Obersteiner
Technical University of Munich, Chair of Scientific Computing, Munich, Germany
Dirk Pflüger ORCID
University of Stuttgart, Scientific Computing, Stuttgart, Germany
Editor
Daniel S. Katz ORCID
Tags
MPI structured grid-based simulations sparse grids black-box solvers Vlasov solvers massively parallel

Citation (CITATION.cff)

cff-version: "1.2.0"
authors:
- family-names: Pollinger
  given-names: Theresa
  orcid: "https://orcid.org/0000-0002-0186-4340"
- family-names: Hurler
  given-names: Marcel
- family-names: Craen
  given-names: Alexander Van
  orcid: "https://orcid.org/0000-0002-3336-7226"
- family-names: Obersteiner
  given-names: Michael
- family-names: Pflüger
  given-names: Dirk
  orcid: "https://orcid.org/0000-0002-4360-0212"
contact:
- family-names: Pollinger
  given-names: Theresa
  orcid: "https://orcid.org/0000-0002-0186-4340"
doi: 10.5281/zenodo.14920617
message: If you use this software, please cite our article in the
  Journal of Open Source Software.
preferred-citation:
  authors:
  - family-names: Pollinger
    given-names: Theresa
    orcid: "https://orcid.org/0000-0002-0186-4340"
  - family-names: Hurler
    given-names: Marcel
  - family-names: Craen
    given-names: Alexander Van
    orcid: "https://orcid.org/0000-0002-3336-7226"
  - family-names: Obersteiner
    given-names: Michael
  - family-names: Pflüger
    given-names: Dirk
    orcid: "https://orcid.org/0000-0002-4360-0212"
  date-published: 2025-02-26
  doi: 10.21105/joss.07018
  issn: 2475-9066
  issue: 106
  journal: Journal of Open Source Software
  publisher:
    name: Open Journals
  start: 7018
  title: "DisCoTec: Distributed higher-dimensional HPC simulations with
    the sparse grid combination technique"
  type: article
  url: "https://joss.theoj.org/papers/10.21105/joss.07018"
  volume: 10
title: "DisCoTec: Distributed higher-dimensional HPC simulations with
  the sparse grid combination technique"

GitHub Events

Total
  • Create event: 6
  • Commit comment event: 1
  • Release event: 1
  • Issues event: 12
  • Delete event: 8
  • Issue comment event: 39
  • Push event: 77
  • Pull request event: 16
  • Pull request review comment event: 8
  • Pull request review event: 10
  • Fork event: 2
Last Year
  • Create event: 6
  • Commit comment event: 1
  • Release event: 1
  • Issues event: 12
  • Delete event: 8
  • Issue comment event: 39
  • Push event: 77
  • Pull request event: 16
  • Pull request review comment event: 8
  • Pull request review event: 10
  • Fork event: 2

Committers

Last synced: 5 months ago

All Time
  • Total Commits: 2,685
  • Total Committers: 25
  • Avg Commits per committer: 107.4
  • Development Distribution Score (DDS): 0.35
Past Year
  • Commits: 11
  • Committers: 3
  • Avg Commits per committer: 3.667
  • Development Distribution Score (DDS): 0.364
Top Committers
Name Email Commits
Theresa Pollinger t****r@i****e 1,744
Marcel Hurler m****3@g****m 395
Obersteiner o****i@i****e 148
Alexander Van Craen A****n@i****e 129
Theresa Pollinger p****a@i****e 98
Mario Heene m****e@i****e 72
Keerthi Gaddameedi k****i@g****m 20
Marvin Dostal d****n@i****e 20
Daniel Pfister p****l@i****e 14
Marcel Breyer m****r@i****e 11
Christoph Niethammer n****r@h****e 7
Mario Heene i****o@e****e 6
Johannes Rentrop r****p@i****e 3
ge25duq g****q@m****e 3
Theresa Pollinger t****r@r****p 3
Mario Heene i****o@e****e 2
Mario Heene i****o@e****e 2
Daniel Pfister p****l@s****e 1
Mario Heene i****o@e****e 1
Mario Heene i****o@e****e 1
Mario Heene i****o@e****e 1
Mario Heene i****o@e****e 1
Obersteiner o****i@a****e 1
Theresa Pollinger p****2@j****s 1
Emily Bourne l****e@g****m 1

Issues and Pull Requests

Last synced: 4 months ago

All Time
  • Total issues: 22
  • Total pull requests: 131
  • Average time to close issues: about 2 months
  • Average time to close pull requests: about 1 month
  • Total issue authors: 3
  • Total pull request authors: 12
  • Average comments per issue: 2.77
  • Average comments per pull request: 0.37
  • Merged pull requests: 107
  • Bot issues: 0
  • Bot pull requests: 0
Past Year
  • Issues: 7
  • Pull requests: 17
  • Average time to close issues: about 2 months
  • Average time to close pull requests: 14 days
  • Issue authors: 2
  • Pull request authors: 5
  • Average comments per issue: 5.0
  • Average comments per pull request: 0.29
  • Merged pull requests: 15
  • Bot issues: 0
  • Bot pull requests: 0
Top Authors
Issue Authors
  • freifrauvonbleifrei (10)
  • EmilyBourne (9)
  • jakelangham (4)
Pull Request Authors
  • freifrauvonbleifrei (101)
  • vancraar (10)
  • cniethammer (7)
  • ge25duq (4)
  • codacy-badger (3)
  • EmilyBourne (2)
  • danielskatz (2)
  • jakelangham (2)
  • PhilippOffenhaeuser (1)
  • breyerml (1)
  • datMaffin (1)
  • obersteiner (1)
Top Labels
Issue Labels
Pull Request Labels

Packages

  • Total packages: 1
  • Total downloads: unknown
  • Total dependent packages: 0
  • Total dependent repositories: 0
  • Total versions: 0
  • Total maintainers: 2
spack.io: discotec

This project contains DisCoTec, a code for the distributed sparse grid combination technique with MPI parallelization.

  • Versions: 0
  • Dependent Packages: 0
  • Dependent Repositories: 0
Rankings
Dependent repos count: 0.0%
Forks count: 28.2%
Average: 28.6%
Stargazers count: 29.6%
Dependent packages count: 56.4%
Maintainers (2)
Last synced: 4 months ago