Science Score: 28.0%
This score indicates how likely this project is to be science-related based on various indicators:
- ✓ CITATION.cff file: Found CITATION.cff file
- ○ codemeta.json file
- ○ .zenodo.json file
- ○ DOI references
- ○ Academic publication links
- ✓ Committers with academic emails: 7 of 8 committers (87.5%) from academic institutions
- ○ Institutional organization owner
- ○ JOSS paper metadata
- ○ Scientific vocabulary similarity: Low similarity (13.6%) to scientific vocabulary
Keywords
Repository
ALPACA: Adaptive Level-set PArallel Code
Basic Info
Statistics
- Stars: 20
- Watchers: 3
- Forks: 6
- Open Issues: 5
- Releases: 0
Topics
Metadata Files
README.md
ALPACA
Overview
ALPACA is an MPI-parallelized C++ code framework to simulate compressible multiphase flow physics. It allows for advanced high-resolution sharp-interface modeling combined with efficient multiresolution compression. The modular code structure offers broad flexibility to select among many state-of-the-art numerical methods, covering WENO/TENO schemes, complete and incomplete Riemann solvers, strong-stability-preserving Runge-Kutta time integration, level-set methods, and more.
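The overview names strong-stability-preserving Runge-Kutta time integration among the available schemes. As a generic textbook illustration (not ALPACA's implementation), here is a minimal sketch of the classic third-order SSP Runge-Kutta step of Shu and Osher, applied to a scalar ODE:

```python
import math

def ssp_rk3_step(f, u, dt):
    """One step of the third-order strong-stability-preserving
    Runge-Kutta scheme (Shu & Osher): each stage is a convex
    combination of forward-Euler steps."""
    u1 = u + dt * f(u)
    u2 = 0.75 * u + 0.25 * (u1 + dt * f(u1))
    return u / 3.0 + 2.0 / 3.0 * (u2 + dt * f(u2))

# Demo on du/dt = -u with u(0) = 1; exact solution is exp(-t).
u, dt = 1.0, 1e-3
for _ in range(1000):
    u = ssp_rk3_step(lambda v: -v, u, dt)
print(abs(u - math.exp(-1.0)))  # third-order accurate: error is tiny
```

Because every stage is a convex combination of forward-Euler updates, the scheme inherits the stability properties of forward Euler, which is what makes it attractive for the shock-capturing discretizations listed above.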
Getting Started
Installation
For a fresh installation, clone recursively:

```bash
git clone --recursive https://github.com/tumaer/ALPACA.git
```

or, for an existing checkout, fetch and update the submodules:

```bash
git fetch && git submodule update --init --recursive
```
Next, we need to install ALPACA's dependencies:
- MPI
- HDF5
On clusters, both are likely available as modules to load. Outside of such computing environments, we need to make sure they are available on our system.
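Before attempting a build, it can be convenient to check that the expected toolchain binaries are on the `PATH`. A minimal, hypothetical helper sketch (not part of ALPACA; the binary names `mpicc`, `mpicxx`, and `h5dump` are assumptions about a typical MPI/HDF5 install):

```python
import shutil

# Toolchain binaries assumed present after a typical MPI + HDF5 install.
REQUIRED = ("mpicc", "mpicxx", "h5dump")

def check_dependencies(tools=REQUIRED):
    """Return {tool: True/False} depending on whether each binary
    is discoverable on the current PATH."""
    return {tool: shutil.which(tool) is not None for tool in tools}

print(check_dependencies())
```

If any entry comes back `False`, load the corresponding cluster module or follow the source-install instructions below before configuring ALPACA.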
MPI Installation Instructions
To install and set up MPI, we can choose between [OpenMPI](https://www.open-mpi.org) and [MPICH](https://www.mpich.org). The instructions here are for OpenMPI, but apply equally to MPICH.

Create the build directory:

```bash
mkdir mpi-build && export MPI_BUILD_DIR=$(pwd)/mpi-build
```

To then begin the installation of MPI, we first have to download the source:

```bash
wget https://download.open-mpi.org/release/open-mpi/v4.1/openmpi-4.1.5.tar.gz
tar -xzf openmpi-4.1.5.tar.gz && cd openmpi-4.1.5
```

We then have to configure our installation, and compile the library:

```bash
./configure --prefix=$MPI_BUILD_DIR
make -j && make install
```

After which we are left to export the MPI directories:

```bash
export PATH=$MPI_BUILD_DIR/bin:$PATH
export LD_LIBRARY_PATH=$MPI_BUILD_DIR/lib:$LD_LIBRARY_PATH
```

> If your cluster environment comes with its own MPI library, you should **always** prefer the system MPI library over a source install.

HDF5 Installation Instructions
To install HDF5, we roughly follow the same outline as for the MPI installation.

Create the build and install directories:

```bash
mkdir hdf5-build && export HDF5_BUILD_DIR=$(pwd)/hdf5-build
mkdir hdf5-install && export HDF5_INSTALL_DIR=$(pwd)/hdf5-install
```

To then begin the installation of [HDF5](https://www.hdfgroup.org/downloads/hdf5/source-code/), we have to get the source, and then unpack it:

```bash
wget https://support.hdfgroup.org/ftp/HDF5/releases/hdf5-1.8/hdf5-1.8.23/src/hdf5-1.8.23.tar.gz
tar -xzf hdf5-1.8.23.tar.gz && cd hdf5-1.8.23
```

Set the compilers to be the MPI compilers:

```bash
export CXX=mpic++
export CC=mpicc
```

After which we have to configure our installation:

```bash
cmake -GNinja -B ../hdf5-build/ -S . \
  -DCMAKE_INSTALL_DIR=$(pwd)/../hdf5-install \
  -DCMAKE_BUILD_TYPE=Release \
  -DCMAKE_C_COMPILER=$(pwd)/../mpi-build/bin/mpicc \
  -DCMAKE_CXX_COMPILER=$(pwd)/../mpi-build/bin/mpic++ \
  -DHDF5_ENABLE_PARALLEL=On \
  -DHDF5_BUILD_CPP_LIB=On \
  -DALLOW_UNSUPPORTED=On
```

To then build and install from the build directory:

```bash
cd $HDF5_BUILD_DIR
ninja && ninja install
```

Having MPI & HDF5, we can then install ALPACA with
```bash
cmake -GNinja -B ../alpaca-build/ -S . \
  -DCMAKE_BUILD_TYPE=Release \
  -DCMAKE_C_COMPILER=mpicc \
  -DCMAKE_CXX_COMPILER=mpicxx \
  -DHDF5_DIR=$HDF5_INSTALL_DIR/cmake
```
To build, we then invoke CMake again:

```bash
cmake --build ../alpaca-build/
```
We highly recommend using `ccache` together with CMake. To do so, add the following flags to the configuration step of CMake:

```bash
-DCMAKE_C_COMPILER_LAUNCHER=ccache -DCMAKE_CXX_COMPILER_LAUNCHER=ccache
```
Testing
To validate the installation, we recommend running the unit tests after the build has completed. First, build the test executable:

```bash
ninja Paco -j 4
```
after which we can run single- as well as two-core tests to verify the correctness of the installation:

```bash
mpiexec -n 1 ./Paco [1rank]
mpiexec -n 2 ./Paco [2rank]
```
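ALPACA depends on HDF5, presumably for simulation I/O, so a cheap sanity check on a result file is to look for the 8-byte signature that opens every HDF5 file. A minimal sketch, independent of ALPACA itself (the demo writes a stand-in file rather than reading real simulation output):

```python
import os
import tempfile

# The 8-byte signature at the start of every HDF5 file (HDF5 format spec).
HDF5_MAGIC = b"\x89HDF\r\n\x1a\n"

def looks_like_hdf5(path):
    """Cheap sanity check: does the file begin with the HDF5 signature?"""
    with open(path, "rb") as fh:
        return fh.read(8) == HDF5_MAGIC

# Demo with a stand-in file carrying the signature.
with tempfile.NamedTemporaryFile(delete=False, suffix=".h5") as tmp:
    tmp.write(HDF5_MAGIC + b"\x00" * 8)
print(looks_like_hdf5(tmp.name))  # True
os.remove(tmp.name)
```

This only checks the file signature, not the contents; for real inspection of simulation output, tools such as `h5dump` from the HDF5 installation are the appropriate choice.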
For further instructions, first steps, and API documentation, please consult the ReadTheDocs.
Academic Usage
If you use ALPACA in an academic setting, please cite our papers.
Acknowledgments
ALPACA has received support from multiple funding bodies over the course of its inception:
- This project has received funding from the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation program: ERC Advanced Grant No. 667483, Prof. Dr. Nikolaus A. Adams, "NANOSHOCK - Manufacturing Shock Interactions for Innovative Nanoscale Processes"
- This project has received computing time on the GCS Supercomputer SuperMUC at Leibniz Supercomputing Centre (www.lrz.de) from the Gauss Centre for Supercomputing e.V. (www.gauss-centre.eu).
- This project has received funding from German Research Foundation (DFG).
- This project has received funding from the Bavarian State Ministry of Science and the Arts through the Competence Network for Scientific High Performance Computing in Bavaria (KONWIHR).
Owner
- Name: Chair of Aerodynamics and Fluid Mechanics
- Login: tumaer
- Kind: organization
- Email: github-admin@aer.mw.tum.de
- Location: Garching, Germany
- Website: https://www.aer.mw.tum.de
- Repositories: 6
- Profile: https://github.com/tumaer
Official GitHub Account of the Chair of Aerodynamics and Fluid Mechanics of the Technical University of Munich.
Citation (CITATION.bib)
@article{hoppe2022parallel,
title={A parallel modular computing environment for three-dimensional multiresolution simulations of compressible flows},
author={Hoppe, Nils and Adami, Stefan and Adams, Nikolaus A},
journal={Computer Methods in Applied Mechanics and Engineering},
volume={391},
pages={114486},
year={2022},
publisher={Elsevier}
}
@article{hoppe2022alpaca,
title={ALPACA-a level-set based sharp-interface multiresolution solver for conservation laws},
author={Hoppe, Nils and Winter, Josef M and Adami, Stefan and Adams, Nikolaus A},
journal={Computer Physics Communications},
volume={272},
pages={108246},
year={2022},
publisher={Elsevier}
}
GitHub Events
Total
- Watch event: 6
- Fork event: 1
Last Year
- Watch event: 6
- Fork event: 1
Committers
Last synced: about 2 years ago
Top Committers
| Name | Email | Commits |
|---|---|---|
| Ludger Paehler | l****r@t****e | 31 |
| Nils Hoppe | n****e@t****e | 17 |
| Alexander Bußmann | a****n@t****e | 15 |
| Nils Hoppe | 1****e@u****e | 13 |
| Josef Winter | j****r@t****e | 12 |
| Jakob Kaiser | j****r@t****e | 7 |
| Nico Fleischmann | n****n@t****e | 5 |
| Thomas Paula | t****a@t****e | 2 |
Committer Domains (Top 20 + Academic)
Issues and Pull Requests
Last synced: about 2 years ago
All Time
- Total issues: 9
- Total pull requests: 4
- Average time to close issues: 4 days
- Average time to close pull requests: 7 minutes
- Total issue authors: 1
- Total pull request authors: 2
- Average comments per issue: 0.0
- Average comments per pull request: 0.0
- Merged pull requests: 3
- Bot issues: 0
- Bot pull requests: 0
Past Year
- Issues: 9
- Pull requests: 4
- Average time to close issues: 4 days
- Average time to close pull requests: 7 minutes
- Issue authors: 1
- Pull request authors: 2
- Average comments per issue: 0.0
- Average comments per pull request: 0.0
- Merged pull requests: 3
- Bot issues: 0
- Bot pull requests: 0
Top Authors
Issue Authors
- ludgerpaehler (9)
Pull Request Authors
- ludgerpaehler (3)
- ArashPartow (1)
Top Labels
Issue Labels
Pull Request Labels
Dependencies
- actions/checkout v3 composite
- DoozyX/clang-format-lint-action v0.11 composite
- actions/checkout v3 composite
- h5py *
- mpmath *
- numpy *
- pandas *
- python-dateutil *
- pytz *
- scipy *
- six *
- sympy *