GridapDistributed
GridapDistributed: a massively parallel finite element toolbox in Julia - Published in JOSS (2022)
Science Score: 95.0%
This score indicates how likely this project is to be science-related, based on the following indicators:
- ○ CITATION.cff file: not found
- ✓ codemeta.json file: found
- ✓ .zenodo.json file: found
- ✓ DOI references: found 6 DOI reference(s) in README and JOSS metadata
- ✓ Academic publication links: joss.theoj.org, zenodo.org
- ✓ Committers with academic emails: 7 of 11 committers (63.6%) from academic institutions
- ○ Institutional organization owner: not found
- ✓ JOSS paper metadata: published in Journal of Open Source Software
Repository
Parallel distributed-memory version of Gridap
Basic Info
Statistics
- Stars: 125
- Watchers: 8
- Forks: 22
- Open Issues: 22
- Releases: 27
Metadata Files
README.md
GridapDistributed
Parallel distributed-memory version of Gridap.jl.
Purpose
GridapDistributed.jl provides a fully parallel distributed-memory extension of the Gridap.jl library. It allows users to approximate PDEs on parallel computers, from multi-core CPU desktop computers to HPC clusters and supercomputers. The package is designed to be as non-intrusive as possible. As a result, sequential Julia scripts written with the high-level API of Gridap.jl can be used almost verbatim, up to minor adjustments, in a parallel context with GridapDistributed.jl.
At present, GridapDistributed.jl provides scalable parallel data structures for grid handling, finite element spaces setup, and distributed linear system assembly. For the latter part, i.e., global distributed sparse matrices and vectors, GridapDistributed.jl relies on PartitionedArrays.jl as distributed linear algebra backend.
Documentation
GridapDistributed.jl and Gridap.jl share almost the same high-level API; we refer to the documentation of Gridap.jl for more details about the API. The example below highlights the minor differences between the two APIs. We also refer to the Gridap tutorials and the GridapDistributed.jl documentation for additional examples and rationale.
Execution modes and how to execute the program in each mode
GridapDistributed.jl driver programs can be run either in debug execution mode (very useful for developing and debugging parallel programs; see the test/sequential/ folder for examples) or in message-passing (MPI) execution mode (for deploying the code on an actual parallel computer and performing fast simulations; see the test/mpi/ folder for examples). Even if you do not have access to a parallel machine, you should be able to run in both modes on your local desktop or laptop.
A GridapDistributed.jl driver program written in debug execution mode as, e.g., the one available at test/sequential/PoissonTests.jl, is executed from the terminal just as any other Julia script:
```bash
julia test/sequential/PoissonTests.jl
```
On the other hand, a driver program written in MPI execution mode, such as the one shown in the snippet in the next section, involves an invocation of the mpiexecjl script (see below):
```bash
mpiexecjl -n 4 julia gridap_distributed_mpi_mode_example.jl
```
with the appropriate number of MPI tasks, -n 4 in this particular example.
Simple example (MPI-parallel execution mode)
The following Julia code snippet solves a 2D Poisson problem in parallel on the unit square. The example uses the MPI-parallel execution mode (note the with_mpi() function call) and thus must be executed on 4 MPI tasks (note that the mesh is partitioned into 4 parts) using the instructions below. To use the debug execution mode instead, simply replace with_mpi() with with_debug(); GridapDistributed.jl debug-mode scripts are executed as any other sequential Julia script.
```julia
using Gridap
using GridapDistributed
using PartitionedArrays

function main(ranks)
  domain = (0,1,0,1)
  mesh_partition = (2,2)
  mesh_cells = (4,4)
  model = CartesianDiscreteModel(ranks,mesh_partition,domain,mesh_cells)
  order = 2
  u((x,y)) = (x+y)^order
  f(x) = -Δ(u)(x)
  reffe = ReferenceFE(lagrangian,Float64,order)
  V = TestFESpace(model,reffe,dirichlet_tags="boundary")
  U = TrialFESpace(u,V)
  Ω = Triangulation(model)
  dΩ = Measure(Ω,2*order)
  a(u,v) = ∫( ∇(v)⋅∇(u) )dΩ
  l(v) = ∫( v*f )dΩ
  op = AffineFEOperator(a,l,U,V)
  uh = solve(op)
  writevtk(Ω,"results",cellfields=["uh"=>uh,"grad_uh"=>∇(uh)])
end

with_mpi() do distribute
  ranks = distribute_with_mpi(LinearIndices((4,)))
  main(ranks)
end
```
The domain is discretized using the parallel Cartesian-like mesh generator built into GridapDistributed. The only minimal difference with respect to a sequential Gridap script is the call to the with_mpi function of PartitionedArrays.jl right at the beginning of the program. With this function, the programmer sets up the PartitionedArrays.jl communication backend (i.e., MPI in the example) and provides a function to be run on each part (using Julia's do-block syntax in the example). The function body is equivalent to a sequential Gridap script, except for the CartesianDiscreteModel call, which in GridapDistributed also takes the ranks and mesh_partition arguments.
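For reference, the same driver can be exercised in debug execution mode without MPI. The sketch below assumes the `main` function defined in the example above; the only change is replacing with_mpi() with with_debug(), in which case the "distributed" data structures are plain arrays that can be inspected in a single Julia process.

```julia
# Debug-mode variant of the driver above (a sketch; `main` is the function
# from the MPI example). No MPI launch is needed: run it as a plain script.
using GridapDistributed
using PartitionedArrays

with_debug() do distribute
  # 4 parts, matching the (2,2) mesh partition of the example
  ranks = distribute(LinearIndices((4,)))
  main(ranks)
end
```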
Using parallel solvers
GridapDistributed.jl is not a library of parallel linear solvers. The linear solver kernel within GridapDistributed.jl, invoked via the backslash operator \, is just a sparse LU solver applied to the global system gathered on a master task (not scalable, but very useful for testing and debugging purposes).
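As a sketch of what selecting this default kernel explicitly could look like in user code (assuming Gridap's BackslashSolver and LinearFESolver wrappers; in the examples above, plain solve(op) uses this kernel implicitly):

```julia
# Hypothetical sketch: explicitly wrapping the sequential backslash kernel.
# `op` is an AffineFEOperator as in the examples above. Not scalable;
# intended for testing and debugging only.
solver = LinearFESolver(BackslashSolver())
uh = solve(solver, op)
```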
GridapPETSc.jl provides access to the full set of scalable linear and nonlinear solvers of the PETSc library. For an example that combines GridapDistributed with GridapPETSc.jl, we refer to the Gridap tutorials. Additional examples can be found in the test/ folder of GridapPETSc.
We also provide native Julia parallel solvers and preconditioners in GridapSolvers.jl.
Partitioned meshes
GridapDistributed.jl provides a built-in parallel generator of Cartesian-like meshes of arbitrary-dimensional, topologically n-cube domains.
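For instance, a 3D unit cube partitioned into 2x2x2 parts could be set up along the lines of the following sketch (debug mode for simplicity; the domain, partition, and cell counts are illustrative):

```julia
# Sketch: built-in Cartesian mesh generator in 3D (illustrative values).
using GridapDistributed
using PartitionedArrays

with_debug() do distribute
  ranks = distribute(LinearIndices((8,)))  # 8 parts for a 2x2x2 partition
  domain = (0,1,0,1,0,1)                   # unit cube
  mesh_partition = (2,2,2)
  mesh_cells = (8,8,8)
  model = CartesianDiscreteModel(ranks,mesh_partition,domain,mesh_cells)
end
```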
Distributed unstructured meshes are generated using GridapGmsh.jl. We also refer to GridapP4est.jl for peta-scale handling of meshes that can be decomposed as a forest of quadtrees/octrees of the computational domain. Examples of distributed solvers that combine all these building blocks can be found in the Gridap tutorials.
A more complex example (MPI-parallel execution mode)
In the following example, we combine GridapDistributed (for the parallel implementation of the PDE discretisation), GridapGmsh (for the distributed unstructured mesh), and GridapPETSc (for the linear solver step). The mesh file can be found here.
```julia
using Gridap
using GridapGmsh
using GridapPETSc
using GridapDistributed
using PartitionedArrays

function main(ranks)
  options = "-ksp_type cg -pc_type gamg -ksp_monitor"
  GridapPETSc.with(args=split(options)) do
    model = GmshDiscreteModel(ranks,"demo.msh")
    order = 1
    dirichlet_tags = ["boundary1","boundary2"]
    u_boundary1(x) = 0.0
    u_boundary2(x) = 1.0
    reffe = ReferenceFE(lagrangian,Float64,order)
    V = TestFESpace(model,reffe,dirichlet_tags=dirichlet_tags)
    U = TrialFESpace(V,[u_boundary1,u_boundary2])
    Ω = Interior(model)
    dΩ = Measure(Ω,2*order)
    a(u,v) = ∫( ∇(u)⋅∇(v) )dΩ
    l(v) = 0
    op = AffineFEOperator(a,l,U,V)
    solver = PETScLinearSolver()
    uh = solve(solver,op)
    writevtk(Ω,"demo",cellfields=["uh"=>uh])
  end
end

with_mpi() do distribute
  ranks = distribute_with_mpi(LinearIndices((6,)))
  main(ranks)
end
```
Build
Before using the GridapDistributed.jl package, one needs to build the MPI.jl package. We refer to the main documentation of MPI.jl for configuration instructions.
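In practice, this typically amounts to installing MPI.jl and, optionally, the mpiexecjl wrapper script used below. A sketch, assuming the documented MPI.jl API:

```julia
# Sketch: install MPI.jl and the mpiexecjl wrapper script.
# See the MPI.jl documentation for configuration details, e.g. how to
# select a system-provided MPI installation via MPIPreferences.jl.
using Pkg
Pkg.add("MPI")
using MPI
MPI.install_mpiexecjl()  # installs mpiexecjl, by default under ~/.julia/bin
```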
MPI-parallel Julia script execution instructions
In order to execute an MPI-parallel GridapDistributed.jl driver, we can leverage the mpiexecjl script provided by MPI.jl (click here for installation instructions). As an example, assuming that we are located in the root directory of GridapDistributed.jl, a hypothetical MPI-parallel GridapDistributed.jl driver named driver.jl can be executed on 4 MPI tasks as:

```bash
mpiexecjl --project=. -n 4 julia -J sys-image.so driver.jl
```
where -J sys-image.so is optional, but highly recommended in order to reduce JIT compilation times. Here, sys-image.so is assumed to be a Julia system image pre-generated for the driver at hand using the PackageCompiler.jl package. See the test/TestApp/compile folder for example scripts with system image generation, along with a test application with source available at test/TestApp/. These scripts are triggered from the .github/workflows/ci.yml file on GitHub CI actions.
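A system image along these lines could be generated with PackageCompiler.jl; the following is a hedged sketch, where the package list and file names are illustrative rather than taken from the repository scripts:

```julia
# Sketch: pre-generate a system image to cut JIT latency for the driver.
# The precompile_execution_file should exercise the driver's code paths.
using PackageCompiler
create_sysimage([:Gridap, :GridapDistributed, :PartitionedArrays];
                sysimage_path="sys-image.so",
                precompile_execution_file="driver.jl")
```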
Known issues
A warning when executing MPI-parallel drivers: data race conditions may occur in the generation of precompiled modules in the cache. See here.
How to cite GridapDistributed
In order to give credit to the Gridap and GridapDistributed contributors, we simply ask you to cite the main Gridap project as indicated here, and the sub-packages you use as indicated in the corresponding repositories. Please use the reference below in any publication in which you have made use of GridapDistributed:
```bibtex
@article{Badia2022,
  doi = {10.21105/joss.04157},
  url = {https://doi.org/10.21105/joss.04157},
  year = {2022},
  publisher = {The Open Journal},
  volume = {7},
  number = {74},
  pages = {4157},
  author = {Santiago Badia and Alberto F. Martín and Francesc Verdugo},
  title = {GridapDistributed: a massively parallel finite element toolbox in Julia},
  journal = {Journal of Open Source Software}
}
```
Contributing to GridapDistributed
GridapDistributed is a collaborative project open to contributions. If you want to contribute, please take into account:
- Before opening a PR with a significant contribution, contact the project administrators by opening an issue describing what you are willing to implement. Wait for feedback from other community members.
- We adhere to the contribution and code-of-conduct guidelines of the Gridap.jl project, available here and here, respectively. Please carefully read and follow the instructions in these files.
- Open a PR with your contribution.
Want to help? We have issues waiting for help. You can start contributing to the GridapDistributed project by solving some of those issues.
Owner
- Name: Gridap
- Login: gridap
- Kind: organization
- Repositories: 21
- Profile: https://github.com/gridap
Software ecosystem to solve PDEs in Julia
JOSS Publication
GridapDistributed: a massively parallel finite element toolbox in Julia
Authors
Tags
Partial Differential Equations, Finite Elements, Distributed Memory Parallelization, High Performance Computing
GitHub Events
Total
- Create event: 15
- Commit comment event: 9
- Release event: 4
- Issues event: 8
- Watch event: 17
- Delete event: 12
- Issue comment event: 28
- Push event: 93
- Pull request event: 25
- Fork event: 5
Last Year
- Create event: 15
- Commit comment event: 9
- Release event: 4
- Issues event: 8
- Watch event: 17
- Delete event: 12
- Issue comment event: 28
- Push event: 93
- Pull request event: 25
- Fork event: 5
Committers
Last synced: 5 months ago
Top Committers
| Name | Email | Commits |
|---|---|---|
| amartin | a****n@m****u | 379 |
| JordiManyer | j****r@m****u | 261 |
| Francesc Verdugo | f****o@c****u | 133 |
| Oriol Colomes | o****s@g****m | 31 |
| Santiago Badia | s****a@m****u | 17 |
| dependabot[bot] | 4****] | 5 |
| principe | p****e@c****u | 3 |
| Pere Antoni Martorell | p****l@u****u | 3 |
| Eric Neiva | e****a@c****r | 2 |
| Pietro Monticone | 3****e | 1 |
| Hendrik Ranocha | m****l@r****e | 1 |
Committer Domains (Top 20 + Academic)
Issues and Pull Requests
Last synced: 4 months ago
All Time
- Total issues: 56
- Total pull requests: 137
- Average time to close issues: 6 months
- Average time to close pull requests: 27 days
- Total issue authors: 18
- Total pull request authors: 14
- Average comments per issue: 2.75
- Average comments per pull request: 1.48
- Merged pull requests: 113
- Bot issues: 0
- Bot pull requests: 12
Past Year
- Issues: 5
- Pull requests: 28
- Average time to close issues: about 4 hours
- Average time to close pull requests: 7 days
- Issue authors: 3
- Pull request authors: 4
- Average comments per issue: 1.0
- Average comments per pull request: 0.07
- Merged pull requests: 22
- Bot issues: 0
- Bot pull requests: 12
Top Authors
Issue Authors
- amartinhuertas (23)
- JordiManyer (6)
- DevasmitDutta (4)
- fverdugo (3)
- oriolcg (3)
- shreyas02 (3)
- ZiwenH (2)
- PetrKryslUCSD (2)
- emvanderlinde (1)
- JuliaTagBot (1)
- eschnett (1)
- zjwegert (1)
- janmodderman (1)
- carlodev (1)
- Jai-Tushar (1)
Pull Request Authors
- JordiManyer (52)
- amartinhuertas (35)
- fverdugo (15)
- dependabot[bot] (12)
- oriolcg (8)
- principejavier (4)
- pmartorell (3)
- shreyas02 (2)
- pitmonticone (1)
- lamBOOO (1)
- xuanxu (1)
- ranocha (1)
- carlodev (1)
- ericneiva (1)
Packages
- Total packages: 1
- Total downloads (julia): 194
- Total dependent packages: 6
- Total dependent repositories: 0
- Total versions: 27
juliahub.com: GridapDistributed
Parallel distributed-memory version of Gridap
- Documentation: https://docs.juliahub.com/General/GridapDistributed/stable/
- License: MIT
- Latest release: 0.4.9 (published 5 months ago)
Dependencies
- JuliaRegistries/TagBot v1 composite
- actions/cache v1 composite
- actions/checkout v2 composite
- codecov/codecov-action v1 composite
- julia-actions/julia-buildpkg v1 composite
- julia-actions/julia-processcoverage v1 composite
- julia-actions/setup-julia v1 composite
