Singularity-EOS

Singularity-EOS: Performance Portable Equations of State and Mixed Cell Closures - Published in JOSS (2024)

https://github.com/lanl/singularity-eos

Science Score: 98.0%

This score indicates how likely this project is to be science-related based on various indicators:

  • CITATION.cff file
  • codemeta.json file
    Found codemeta.json file
  • .zenodo.json file
    Found .zenodo.json file
  • DOI references
    Found 1 DOI reference(s) in JOSS metadata
  • Academic publication links
  • Committers with academic emails
    27 of 39 committers (69.2%) from academic institutions
  • Institutional organization owner
    Organization lanl has institutional domain (www.lanl.gov)
  • JOSS paper metadata
    Published in Journal of Open Source Software

Scientific Fields

Biochemistry, Genetics and Molecular Biology Life Sciences - 40% confidence
Last synced: 6 months ago

Repository

Performance portable equations of state and mixed cell closures

Basic Info
Statistics
  • Stars: 32
  • Watchers: 7
  • Forks: 21
  • Open Issues: 92
  • Releases: 13
Created almost 5 years ago · Last pushed 6 months ago
Metadata Files
Readme Changelog License

README.build.md

Overview

The singularity-eos build system is designed with two goals in mind:

  1. Portability to a wide range of host codes, system layouts, and underlying hardware
  2. Ease of code development, and flexibility for developers

These considerations continue to guide development of the tools and workflows for working with singularity-eos.

Basics

The build of singularity-eos can take two forms:

  1. Submodule mode
  2. Standalone mode

These will be described in more detail below, but in brief submodule mode is intended for downstream codes that build singularity-eos source code directly in the build (sometimes referred to as "in-tree"), while standalone mode will build singularity-eos as an independent library that can be installed onto the system.

The most important distinction between the modes is how dependencies are handled. Submodule mode uses internal source clones of key dependencies (located in utils/), effectively building these dependencies as part of the overall singularity-eos build procedure. Note, however, that some optional dependencies are not provided internally and must be made available separately.

In standalone mode, all dependencies must be available in the environment, and be discoverable to CMake. While not required, it is encouraged to use the dependency management tool spack to help facilitate constructing a build environment, as well as deploying singularity-eos. Example uses of spack for these purposes are provided below.

A CMake configuration option is provided that allows developers to select a specific mode (SINGULARITY_FORCE_SUBMODULE_MODE), however this is intended for internal development only. The intended workflow is to let singularity-eos select the appropriate mode, which it determines by inspecting the project directory in which the source resides.

Options for configuring the build

Most configuration options are the same between the two build modes. Standalone- and submodule-specific options are touched on in the sections detailing those modes.

The main CMake options to configure building are in the following table:

| Option | Default | Comment |
|--|--|--|
| SINGULARITY_USE_SPINER | ON | Enables EOS objects that use spiner. |
| SINGULARITY_USE_FORTRAN | ON | Enable Fortran API for equation of state. |
| SINGULARITY_USE_KOKKOS | OFF | Uses Kokkos as the portability backend. Currently only Kokkos is supported for GPUs. |
| SINGULARITY_USE_EOSPAC | OFF | Link against EOSPAC. Needed for sesame2spiner and some tests. |
| SINGULARITY_BUILD_TESTS | OFF | Build test infrastructure. |
| SINGULARITY_BUILD_PYTHON | OFF | Build Python bindings. |
| SINGULARITY_INVERT_AT_SETUP | OFF | For tests, pre-invert eospac tables. |
| SINGULARITY_BETTER_DEBUG_FLAGS | ON | Enables nicer GPU debug flags. May interfere with in-tree builds as a submodule. |
| SINGULARITY_HIDE_MORE_WARNINGS | OFF | Makes warnings less verbose. May interfere with in-tree builds as a submodule. |
| SINGULARITY_FORCE_SUBMODULE_MODE | OFF | Force build in submodule mode. |
| SINGULARITY_USE_SINGLE_LOGS | OFF | Use single precision logarithms (may degrade accuracy). |
| SINGULARITY_USE_TRUE_LOG_GRIDDING | OFF | Use grids that conform to logarithmic spacing. |

More options are available to modify only if certain other options or variables satisfy certain conditions (dependent options). Dependent options can only be accessed if their precondition is satisfied.

If the precondition is satisfied, they take on a default value, although they can be changed. If the precondition is not satisfied, then their value is fixed and cannot be changed. For instance,

```bash
# in /build
cmake .. -DSINGULARITY_USE_KOKKOS=OFF -DSINGULARITY_USE_CUDA=ON
```

This will have no effect (i.e. SINGULARITY_USE_CUDA will be set to OFF), because the precondition of SINGULARITY_USE_CUDA is SINGULARITY_USE_KOKKOS=ON.

Generally, dependent options should only be used for specific use-cases where the defaults are not applicable. For most scenarios, the preconditions and defaults are logically constructed and the most natural in practice (SINGULARITY_TEST_* options are only available if SINGULARITY_BUILD_TESTS is enabled, for instance).

These options are listed in the following table, along with their preconditions:

|Option|Precondition|Default (condition true/false)|Comment|
|--|--|--|--|
| SINGULARITY_USE_SPINER_WITH_HDF5 | SINGULARITY_USE_SPINER=ON | ON/OFF | Requests that spiner be configured for HDF5 support. |
| SINGULARITY_USE_CUDA | SINGULARITY_USE_KOKKOS=ON | ON/OFF | Target NVIDIA GPUs for Kokkos offloading. |
| SINGULARITY_USE_KOKKOSKERNELS | SINGULARITY_USE_KOKKOS=ON | ON/OFF | Use Kokkos Kernels for linear algebra. Needed for mixed cell closure models on GPU. |
| SINGULARITY_BUILD_CLOSURE | SINGULARITY_USE_KOKKOS=ON SINGULARITY_USE_KOKKOSKERNELS=ON | ON/OFF | Mixed cell closure. |
| SINGULARITY_BUILD_SESAME2SPINER | SINGULARITY_USE_SPINER=ON SINGULARITY_USE_SPINER_WITH_HDF5=ON | ON/OFF | Builds the conversion tool sesame2spiner, which makes files readable by SpinerEOS. |
| SINGULARITY_BUILD_STELLARCOLLAPSE2SPINER | SINGULARITY_USE_SPINER=ON SINGULARITY_USE_SPINER_WITH_HDF5=ON | ON/OFF | Builds the conversion tool stellarcollapse2spiner, which optionally makes Stellar Collapse files faster to read. |
| SINGULARITY_TEST_SESAME | SINGULARITY_BUILD_TESTS=ON SINGULARITY_BUILD_SESAME2SPINER=ON | ON/OFF | Test the Sesame table readers. |
| SINGULARITY_TEST_STELLAR_COLLAPSE | SINGULARITY_BUILD_TESTS=ON SINGULARITY_BUILD_STELLARCOLLAPSE2SPINER=ON | ON/OFF | Test the Stellar Collapse table readers. |
| SINGULARITY_TEST_PYTHON | SINGULARITY_BUILD_TESTS=ON SINGULARITY_BUILD_PYTHON=ON | ON/OFF | Test the Python bindings. |

CMake presets

To further aid the developer, singularity-eos is distributed with CMake presets: a list of common build configurations with descriptive labels that reduce the need to input and remember the many options singularity-eos uses. For a general overview of CMake presets, see the CMake documentation on presets.

Predefined presets

Predefined presets are described with a JSON schema in the file CMakePresets.json. As an example:

```bash
# in /build
$> cmake .. --preset="basic_with_testing"
Preset CMake variables:

  CMAKE_EXPORT_COMPILE_COMMANDS="ON"
  SINGULARITY_BUILD_TESTS="ON"
  SINGULARITY_USE_EOSPAC="ON"
  SINGULARITY_USE_SPINER="ON"

...
```

As you can see, CMake reports the configuration variables that the preset has used, and their values. A list of presets can be easily examined with:

```bash
# in /build
$> cmake .. --list-presets
Available configure presets:

  "basic"
  "basic_with_testing"
  "kokkos_nogpu"
  "kokkos_nogpu_with_testing"
  "kokkos_gpu"
  "kokkos_gpu_with_testing"
```

When using presets, additional options may be readily appended to augment the required build. For example, suppose that the basic preset is mostly sufficient, but you would like to enable building the closure models:

```bash
# in /build
$> cmake .. --preset="basic_with_testing" -DSINGULARITY_BUILD_CLOSURE=ON

...
```

User defined presets

The CMake preset functionality includes the ability of developers to define local presets in CMakeUserPresets.json. singularity-eos explicitly does not track this file in Git, so developers can construct their own presets. All presets in the predefined CMakePresets.json are automatically included by CMake, so developers can build off of those if needed.

For instance, suppose you have a local checkout of the kokkos and kokkos-kernels codes that you're using to debug a GPU build, and you have these installed in ~/scratch/. Your CMakeUserPresets.json could look like:

```json
{
  "version": 1,
  "cmakeMinimumRequired": { "major": 3, "minor": 19 },
  "configurePresets": [
    {
      "name": "my_local_build",
      "description": "submodule build using a local scratch install of kokkos",
      "inherits": [ "kokkos_gpu_with_testing" ],
      "cacheVariables": {
        "Kokkos_DIR": "$env{HOME}/scratch/kokkos/lib/cmake/Kokkos",
        "KokkosKernels_DIR": "$env{HOME}/scratch/kokkoskernels/lib/cmake/KokkosKernels",
        "SINGULARITY_BUILD_PYTHON": "ON",
        "SINGULARITY_TEST_PYTHON": "OFF"
      }
    }
  ]
}
```

This inherits the predefined kokkos_gpu_with_testing preset, sets the Kokkos*_DIR cache variables so that find_package() uses those directories, and finally enables building the Python bindings without enabling the Python tests.

Building in submodule mode

For submodule mode to activate, a clone of the singularity-eos source should be placed below the top level of a host project:

```bash
# An example directory layout when using singularity-eos in submodule mode
my_project
|-- CMakeLists.txt
|-- README.md
|-- src
|-- include
|-- tpl/singularity-eos
```

singularity-eos is then imported using the `add_subdirectory()` command in CMake:

```cmake
# In your CMakeLists.txt
cmake_minimum_required(VERSION 3.19)
project(my_project)

add_executable(my_exec src/main.cc)
target_include_directories(my_exec PRIVATE include)

add_subdirectory(tpl/singularity-eos)

target_link_libraries(my_exec singularity-eos::singularity-eos)
```

This will expose the singularity-eos interface and library to your code, along with the interfaces of the internal dependencies

```c++
// in source of my_project

// the main singularity-eos interface
#include <singularity-eos/eos/eos.hpp>

// from the internal ports-of-call submodule
#include <ports-of-call/portability.hpp>

// ...

using namespace singularity;
```

singularity-eos will build (along with internal dependencies) and be linked directly to your executable.

The git submodules may change during development, whether by a change to the pinned hash or by the addition or removal of submodules. If you see errors that appear to result from incompatible code, make sure you have updated your submodules with

```bash
git submodule update --init --recursive
```
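As a convenience, a small shell check along these lines (the tpl/singularity-eos path follows the example layout above; adjust for your project) can catch an uninitialized submodule before CMake produces a confusing error:

```bash
# Hypothetical pre-configure check: an empty submodule directory is the usual
# symptom of a submodule that was never initialized.
submodule_dir="tpl/singularity-eos"   # path from the example layout; adjust for your project

if [ -f "${submodule_dir}/CMakeLists.txt" ]; then
    echo "submodule looks populated"
else
    echo "submodule missing; run: git submodule update --init --recursive"
fi
```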

Building in standalone mode

For standalone mode, all required and optional dependencies are expected to be discoverable by CMake. This can be done several ways

  1. (preferred) Use Spack to configure and install all the dependencies needed to build.
  2. Use a system package manager (apt-get, yum, etc.) to install dependencies.
  3. Hand-build to a local filesystem, and configure your shell or CMake invocation to be aware of these installs.

standalone mode is the mode used to install singularity-eos to a system as a common library. If, for example, you use Spack to install packages, singularity-eos will be built and installed in standalone mode.

Building with Spack

Spack is a package management tool designed specifically for HPC environments, though it may be used in any compute environment. It is useful for gathering, configuring, and installing software and its dependencies self-consistently, and it can use existing software installed on the system or do a "full" install of all required (even system) packages in a local directory.

Spack remains under active development, and is subject to rapid change in interface, design, and functionality. Here we provide an overview of how to use Spack to develop and deploy singularity-eos; for more in-depth information, please refer to the official Spack documentation.

Preparation

First, we need to clone the Spack repository. You can place this anywhere, but note that by default Spack will download and install software under this directory. This default behavior can be changed; please refer to the documentation for information on customizing your Spack instance.

```bash
$> cd ~
$> git clone https://github.com/spack/spack.git
```

To start using Spack, we use the provided activation script

```bash
# equivalent scripts for tcsh, fish are located here as well
$> source ~/spack/share/spack/setup-env.sh
```

You will always need to activate Spack for each new shell. You may find it convenient to invoke this Spack setup in your login script, though be aware that Spack will prepend paths to your environment, which may cause conflicts with other package tools and software.
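If you do add it to a login script, a guarded invocation is a reasonable pattern (this sketch assumes the clone lives at ~/spack, as above):

```bash
# Add to ~/.bashrc or similar. The guard keeps login working on machines
# where the Spack clone (assumed at ~/spack) is absent.
if [ -f "$HOME/spack/share/spack/setup-env.sh" ]; then
    source "$HOME/spack/share/spack/setup-env.sh"
fi
```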

The first time a Spack command is invoked, it will need to bootstrap itself to be able to start concretizing package specs. This will download pre-built packages and create a ${HOME}/.spack directory. This directory is important: it is where your primary Spack configuration data will be located. If at any point this configuration becomes corrupted or too complicated to easily fix, you may safely remove this directory to restore the default configuration, or just to try a new approach. Again, refer to the Spack documentation for more information.

Setup compilers

To use Spack effectively, we need to configure it for the HPC environment we're using. This can be done manually (by editing packages.yaml, compilers.yaml, and perhaps a few others). This is ideal if you understand how your software environment is installed on the HPC system, and you are fluent in the Spack configuration schema.

However, a lot of effort has gone into Spack's ability to automatically discover the available tools and software on a given system. While not perfect, this gives us a fairly robust starting point.

Assume we are on an HPC system with Environment Modules providing compilers, MPI implementations, and sundry other common tools. To help Spack find these, let's load a specific configuration into the active shell environment.

```bash
$> module load cmake/3.19.2 gcc/11.2.0 openmpi/4.1.1 python/3.10
$> module list

Currently Loaded Modules:
  1) cmake/3.19.2   2) gcc/11.2.0   3) openmpi/4.1.1   4) python/3.10-anaconda-2023.03
```

First, let's find the available compilers. (If this is the first Spack command you've run, it will need to bootstrap)

```bash
$> spack compiler find
==> Added 2 new compilers to ${HOME}/.spack/linux/compilers.yaml
    gcc@4.8.5  gcc@11.2.0
==> Compilers are defined in the following files:
    ${HOME}/.spack/linux/compilers.yaml
```

Here, we find the default system compiler (gcc@4.8.5), along with the compiler from the module we loaded. Also notice that the ${HOME}/.spack directory has been modified with some new YAML config files. These record information about the compilers and how Spack will use them. You are free to modify these files, but for now let's leave them as is.

NB: You can repeat this procedure for other compilers and packages, though if you need to use many different combinations of compiler/software, you will find using Spack environments more convenient.

Setup system-provided packages

Next, we will try to find system software (e.g. ncurses, git, zlib) that we can use instead of needing to build our own. This will also find the module software we loaded (cmake, openmpi, python). (This command may take a couple of minutes to complete.)

```bash
$> spack external find --all --not-buildable
==> The following specs have been detected on this system and added to ${HOME}/.spack/packages.yaml
autoconf@2.69 bzip2@1.0.6 coreutils@8.22 dos2unix@6.0.3 gcc@11.2.0 go@1.16.5 hdf5@1.8.12 libfuse@3.6.1 ncurses@6.4.20221231 openssl@1.1.1t python@3.10.9 sqlite@3.7.17 texlive@20130530
automake@1.13.4 bzip2@1.0.8 cpio@2.11 doxygen@1.8.5 gettext@0.19.8.1 go@1.18.4 hdf5@1.10.6 libtool@2.4.2 ninja@1.10.2 perl@5.16.3 rdma-core@22.4 sqlite@3.40.1 which@2.20
bash@4.2.46 ccache@3.7.7 curl@7.29.0 file@5.11 ghostscript@9.25 go-bootstrap@1.16.5 krb5@1.15.1 lustre@2.12.9 opencv@2.4.5 pkg-config@0.27.1 rsync@3.1.2 subversion@1.7.14 xz@5.2.2
berkeley-db@5.3.21 cmake@2.8.12.2 curl@7.87.0 findutils@4.5.11 git@2.18.4 go-bootstrap@1.18.4 krb5@1.19.4 m4@1.4.16 openjdk@1.8.0_372-b07 python@2.7.5 ruby@2.0.0 swig@2.0.10 xz@5.2.10
binutils@2.27.44 cmake@3.17.5 cvs@1.11.23 flex@2.5.37 git-lfs@2.10.0 gpgme@1.3.2 libfabric@1.7.2 maven@3.0.5 openssh@7.4p1 python@3.4.10 sed@4.2.2 tar@1.26 zip@3.0
bison@3.0.4 cmake@3.19.2 diffutils@3.3 gawk@4.0.2 gmake@3.82 groff@1.22.2 libfuse@2.9.2 ncurses@5.9.20130511 openssl@1.0.2k-fips python@3.6.8 slurm@23.02.1 texinfo@5.1

-- no arch / gcc@11.2.0 -----------------------------------------
openmpi@4.1.1
```

Generally you will want to use as much system-provided software as you can get away with (in Spack parlance these are called externals, though external packages are not limited to system-provided ones and can point to, e.g., a manual install). In the above command, we told Spack to mark any packages it can find as not buildable, which means that Spack will never attempt to build those packages and will always use the external ones. This may cause issues in resolving package specs when an external is not compatible with the requirements of a downstream package.

As a first pass, we will use --not-buildable for spack external find, but if you have any issues with concretizing, start this guide over (remove ${HOME}/.spack and go back to the compiler setup) and omit --not-buildable from the previous command. You may also manually edit the packages.yaml file to switch the buildable flag for the troublesome package, but you will need to be at least familiar with the YAML schema.

First install with spack

Let's walk through a simple Spack install workflow. First, we want to look at the options available for a package. The Spack team and package developers have worked over the years to provide an impressive selection of packages. This example will use hypre, a parallel library for multigrid methods.

```bash
$> spack info hypre
AutotoolsPackage:   hypre

Description: Hypre is a library of high performance preconditioners that features parallel multigrid methods for both structured and unstructured grid problems.

Homepage: https://llnl.gov/casc/hypre

Preferred version: 2.28.0 https://github.com/hypre-space/hypre/archive/v2.28.0.tar.gz

Safe versions: develop [git] https://github.com/hypre-space/hypre.git on branch master 2.28.0 https://github.com/hypre-space/hypre/archive/v2.28.0.tar.gz

... more versions listed

Variants: Name [Default] When Allowed values Description ======================== ======= ==================== ==============================================

amdgpu_target [none]        [+rocm]    none, gfx900,           AMD GPU architecture
                                       gfx1030, gfx90c,
                                       gfx90a, gfx1101,
                                       gfx908, gfx1010,

... lots of amd targets listed

build_system [autotools]    --         autotools               Build systems supported by the package
caliper [off]               --         on, off                 Enable Caliper support
complex [off]               --         on, off                 Use complex values
cuda [off]                  --         on, off                 Build with CUDA
cuda_arch [none]            [+cuda]    none, 62, 80, 90,       CUDA architecture
                                       20, 32, 35, 37, 87,
                                       10, 21, 30, 12, 61,
                                       11, 72, 13, 60, 53,
                                       52, 75, 70, 89, 86,
                                       50
debug [off]                 --         on, off                 Build debug instead of optimized version
fortran [on]                --         on, off                 Enables fortran bindings
gptune [off]                --         on, off                 Add the GPTune hookup code
int64 [off]                 --         on, off                 Use 64bit integers
internal-superlu [off]      --         on, off                 Use internal SuperLU routines
mixedint [off]              --         on, off                 Use 64bit integers while reducing memory use
mpi [on]                    --         on, off                 Enable MPI support
openmp [off]                --         on, off                 Enable OpenMP support
rocm [off]                  --         on, off                 Enable ROCm support
shared [on]                 --         on, off                 Build shared library (disables static library)
superlu-dist [off]          --         on, off                 Activates support for SuperLU_Dist library
sycl [off]                  --         on, off                 Enable SYCL support
umpire [off]                --         on, off                 Enable Umpire support
unified-memory [off]        --         on, off                 Use unified memory

Build Dependencies: blas caliper cuda gnuconfig hip hsa-rocr-dev lapack llvm-amdgpu mpi rocprim rocrand rocsparse rocthrust superlu-dist umpire

Link Dependencies: blas caliper cuda hip hsa-rocr-dev lapack llvm-amdgpu mpi rocprim rocrand rocsparse rocthrust superlu-dist umpire

Run Dependencies: None
```

The spack info command gives us three important pieces of information. First, it lists the available versions. If you do not specify a version, the preferred version is the default.

Next, and most important, are the variants. These control how the package is built, e.g. whether to build with MPI, whether to build a Fortran interface, and so on. They have default values, and in practice you will only need to provide a small number for any particular system.

Finally, we are given the dependencies of the package. The dependencies listed are for all configurations, so some dependencies may not be necessary for your particular install. (For instance, if you do not build with cuda, then cuda will not be necessary to install)

Let's look at what Spack will do when we want to install. We will start with the default configuration (that is, all variants are left to default). The spack spec command will try to use the active Spack configuration to determine which packages are needed to install hypre, and will print the dependency tree out.

```bash
$> spack spec hypre

Input spec
--------------------------------
hypre

Concretized
--------------------------------
 -   hypre@2.28.0%gcc@11.2.0~caliper~complex~cuda~debug+fortran~gptune~int64~internal-superlu~mixedint+mpi~openmp~rocm+shared~superlu-dist~sycl~umpire~unified-memory build_system=autotools arch=linux-rhel7-broadwell
 -       ^openblas@0.3.23%gcc@11.2.0~bignuma~consistent_fpcsr+fortran~ilp64+locking+pic+shared build_system=makefile symbol_suffix=none threads=none arch=linux-rhel7-broadwell
[e]      ^perl@5.16.3%gcc@11.2.0+cpanm+opcode+open+shared+threads build_system=generic patches=0eac10e,3bbd7d6 arch=linux-rhel7-broadwell
[e]      ^openmpi@4.1.1%gcc@11.2.0~atomics~cuda~cxx~cxx_exceptions~gpfs~internal-hwloc~internal-pmix~java~legacylaunchers~lustre~memchecker~openshmem~orterunprefix+pmi+romio+rsh~singularity+static+vt~wrapper-rpath build_system=autotools fabrics=ofi,psm,psm2 schedulers=slurm arch=linux-rhel7-broadwell
```

Here we see the full default Spack spec which, as a rough guide, is structured as <package>@<version>%<compiler>@<compiler_version>[+/~]variants <arch_info>. The + and ~ variant prefixes turn binary variants on and off, while variants with a set of values are given as keyword values (e.g. +cuda cuda_arch=70 ~shared).
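To make the spec grammar concrete, here is a small bash sketch (the spec string is illustrative, not a recommendation) that picks a spec apart with shell parameter expansion:

```bash
# Illustrative only: split a Spack spec of the form
#   <package>@<version>%<compiler>@<compiler_version>[+/~variants] [key=value ...]
spec="hypre@2.28.0%gcc@11.2.0+mpi~cuda+shared arch=linux-rhel7-broadwell"

pkg_and_version="${spec%%\%*}"        # up to the first '%': package@version
after_compiler="${spec#*%}"           # drop through the first '%'
compiler="${after_compiler%%[+~ ]*}"  # compiler@version ends at the first variant or space

echo "package@version: ${pkg_and_version}"   # hypre@2.28.0
echo "compiler:        ${compiler}"          # gcc@11.2.0
```

Real specs can be arbitrarily nested (the ^dependency syntax in the output above), so treat this as a reading aid rather than a parser.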

Suppose we want to install a different configuration: in this case, with complex and openmp enabled, but fortran not needed.

```bash
$> spack spec hypre+complex+openmp~fortran

Input spec
--------------------------------
hypre+complex~fortran+openmp

Concretized
--------------------------------
 -   hypre@2.28.0%gcc@11.2.0~caliper+complex~cuda~debug~fortran~gptune~int64~internal-superlu~mixedint+mpi+openmp~rocm+shared~superlu-dist~sycl~umpire~unified-memory build_system=autotools arch=linux-rhel7-broadwell
 -       ^openblas@0.3.23%gcc@11.2.0~bignuma~consistent_fpcsr+fortran~ilp64+locking+pic+shared build_system=makefile symbol_suffix=none threads=none arch=linux-rhel7-broadwell
[e]      ^perl@5.16.3%gcc@11.2.0+cpanm+opcode+open+shared+threads build_system=generic patches=0eac10e,3bbd7d6 arch=linux-rhel7-broadwell
[e]      ^openmpi@4.1.1%gcc@11.2.0~atomics~cuda~cxx~cxx_exceptions~gpfs~internal-hwloc~internal-pmix~java~legacylaunchers~lustre~memchecker~openshmem~orterunprefix+pmi+romio+rsh~singularity+static+vt~wrapper-rpath build_system=autotools fabrics=ofi,psm,psm2 schedulers=slurm arch=linux-rhel7-broadwell
```

Here, you can see the full spec now has our supplied variants. In general, variants can control build options and features, and can change which dependencies are needed.

Notice also the left-aligned marker starting each package line: `-` indicates that Spack isn't aware that this package is installed (which is expected), `[+]` indicates that the package has been previously installed, and `[e]` indicates that the package has been marked as externally installed.

Finally, we can install it. Because perl and openmpi are already present, Spack will not need to download, build, and install these packages. This can save lots of time! Note, however, that external packages are loosely constrained and may not be correctly configured for the requested package.

NB: By default, Spack will try to download the package source from the repository associated with the package. This behavior can be overridden with Spack mirrors, but that is beyond the scope of this doc.

```bash
$> spack install hypre

...
```

Now, we can use Spack similarly to module load:

```bash
$> spack load hypre
$> spack find --loaded
```

Other options are available for integrating Spack-installed packages into your environment. For more, head over to https://spack.readthedocs.io

Developing singularity-eos using Spack

Spack is a powerful tool that can help develop singularity-eos for a variety of platforms and hardware.

  1. Install the dependencies singularity-eos needs using Spack

```bash
$> spack install -u cmake singularity-eos@main%gcc@13+hdf5+eospac+mpi+kokkos+kokkos-kernels+openmp^eospac@6.4.0
```

This command will initiate an install of singularity-eos using Spack, but will stop right before singularity-eos itself starts to build (-u cmake means "until the cmake phase"). This ensures all the necessary dependencies are installed and visible to Spack.

  2. Use Spack to construct an ad-hoc shell environment

```bash
$> spack build-env singularity-eos@main%gcc@13+hdf5+eospac+mpi+kokkos+kokkos-kernels+openmp^eospac@6.4.0 -- bash
```

This command will construct a shell environment in bash that has all the dependency information populated (e.g. PREFIX_PATH, CMAKE_PREFIX_PATH, LD_LIBRARY_PATH, and so on). Even external packages from a module system will be correctly loaded. Thus, we can build for a specific combination of dependencies, compilers, and portability strategies.

```bash
$> salloc -p scaling

...

$> source ~/spack/share/spack/setup-env.sh
$> spack build-env singularity-eos@main%gcc@12+hdf5+eospac+mpi+kokkos+kokkos-kernels+openmp^eospac@6.4.0 -- bash
$> mkdir -p build_gpu_mpi ; cd build_gpu_mpi
$> cmake .. --preset="kokkos_nogpu_with_testing"
```

Owner

  • Name: Los Alamos National Laboratory
  • Login: lanl
  • Kind: organization
  • Email: github-register@lanl.gov
  • Location: Los Alamos, New Mexico, USA

JOSS Publication

Singularity-EOS: Performance Portable Equations of State and Mixed Cell Closures
Published
November 07, 2024
Volume 9, Issue 103, Page 6805
Authors
Jonah M. Miller ORCID
CCS-2, Computational Physics and Methods, Los Alamos National Laboratory, USA, Center for Theoretical Astrophysics, Los Alamos National Laboratory, Los Alamos, NM
Daniel A. Holladay ORCID
Center for Theoretical Astrophysics, Los Alamos National Laboratory, Los Alamos, NM, CCS-7, Applied Computer Science, Los Alamos National Laboratory, USA
Jeffrey H. Peterson ORCID
XCP-2, Eulerian Codes, Los Alamos National Laboratory, USA
Christopher M. Mauney ORCID
Center for Theoretical Astrophysics, Los Alamos National Laboratory, Los Alamos, NM, HPC-ENV, HPC Environments, Los Alamos National Laboratory, USA
Richard Berger ORCID
CCS-7, Applied Computer Science, Los Alamos National Laboratory, USA
Anna Pietarila Graham
HPC-ENV, HPC Environments, Los Alamos National Laboratory, USA
Karen C. Tsai ORCID
CCS-7, Applied Computer Science, Los Alamos National Laboratory, USA
Brandon Barker ORCID
CCS-2, Computational Physics and Methods, Los Alamos National Laboratory, USA, Center for Theoretical Astrophysics, Los Alamos National Laboratory, Los Alamos, NM, Department of Physics and Astronomy, Michigan State University, USA, Center for Nonlinear Studies, Los Alamos National Laboratory, USA
Alexander Holas ORCID
Center for Theoretical Astrophysics, Los Alamos National Laboratory, Los Alamos, NM, CCS-7, Applied Computer Science, Los Alamos National Laboratory, USA, Heidelberg Institute for Theoretical Studies, Germany
Ann E. Mattsson ORCID
XCP-5, Materials and Physical Data, Los Alamos National Laboratory, USA
Mariam Gogilashvili ORCID
CCS-2, Computational Physics and Methods, Los Alamos National Laboratory, USA, Center for Theoretical Astrophysics, Los Alamos National Laboratory, Los Alamos, NM, Center for Nonlinear Studies, Los Alamos National Laboratory, USA, Department of Physics, Florida State University, USA
Joshua C. Dolence ORCID
CCS-2, Computational Physics and Methods, Los Alamos National Laboratory, USA, Center for Theoretical Astrophysics, Los Alamos National Laboratory, Los Alamos, NM
Chad D. Meyer ORCID
XCP-4, Continuum Models and Numerical Methods, Los Alamos National Laboratory, USA
Sriram Swaminarayan ORCID
CCS-7, Applied Computer Science, Los Alamos National Laboratory, USA
Christoph Junghans ORCID
CCS-7, Applied Computer Science, Los Alamos National Laboratory, USA
Editor
Kyle Niemeyer ORCID
Tags
Fortran equations of state thermodynamics performance portability

GitHub Events

Total
  • Create event: 79
  • Release event: 3
  • Issues event: 65
  • Watch event: 4
  • Delete event: 62
  • Member event: 3
  • Issue comment event: 304
  • Push event: 982
  • Pull request review comment event: 543
  • Pull request review event: 492
  • Pull request event: 145
  • Fork event: 12
Last Year
  • Create event: 79
  • Release event: 3
  • Issues event: 65
  • Watch event: 4
  • Delete event: 62
  • Member event: 3
  • Issue comment event: 304
  • Push event: 984
  • Pull request review comment event: 543
  • Pull request review event: 492
  • Pull request event: 145
  • Fork event: 12

Committers

Last synced: 7 months ago

All Time
  • Total Commits: 2,590
  • Total Committers: 39
  • Avg Commits per committer: 66.41
  • Development Distribution Score (DDS): 0.641
Past Year
  • Commits: 693
  • Committers: 18
  • Avg Commits per committer: 38.5
  • Development Distribution Score (DDS): 0.364
Top Committers
Name Email Commits
Jonah Miller j****m@l****v 929
Jeffrey H Peterson j****p@l****v 552
Richard Berger r****r@l****v 228
Daniel Holladay d****0 169
Christopher Mauney m****c@l****v 156
Jonah Miller j****r@g****m 63
Karen Chung-Yen Tsai k****i@l****v 61
Christopher Mauney m****c@p****v 53
Gopinath Subramanian g****s@l****v 51
Josh Dolence j****e@l****v 45
Benjamin Joel Musick b****k@p****v 37
Alexander Holas a****s@h****s 29
AlexHls a****s@h****g 26
Ann Elisabet Wills - 298385 a****s@d****v 24
Anna Pietarila Graham a****p@l****v 23
github-actions[bot] g****] 17
Ann Elisabet Wills - 298385 a****s@d****v 13
Yannick de Jong y****g@i****m 12
Ben R. Ryan b****n@l****v 11
Joshua Basabe j****e@l****v 11
Ann Elisabet Wills - 298385 a****s@d****v 11
Christoph Junghans j****s@l****v 10
Patrick Mullen p****n@p****v 9
Shane Patrick Fogerty - 322405 s****y@g****m 7
Brandon Barker b****y@p****h 6
mari2895 m****i@f****u 6
Peter Brady p****b@l****v 5
Parikshit Bajpai P****i@i****v 5
Seth James Gerberding g****g@r****v 4
c0sm0-kramer 1****r 3
and 9 more...

Issues and Pull Requests

Last synced: 6 months ago

All Time
  • Total issues: 114
  • Total pull requests: 353
  • Average time to close issues: about 1 year
  • Average time to close pull requests: 19 days
  • Total issue authors: 14
  • Total pull request authors: 30
  • Average comments per issue: 1.68
  • Average comments per pull request: 3.23
  • Merged pull requests: 293
  • Bot issues: 0
  • Bot pull requests: 22
Past Year
  • Issues: 33
  • Pull requests: 165
  • Average time to close issues: 24 days
  • Average time to close pull requests: 3 days
  • Issue authors: 6
  • Pull request authors: 17
  • Average comments per issue: 0.61
  • Average comments per pull request: 1.87
  • Merged pull requests: 130
  • Bot issues: 0
  • Bot pull requests: 15
Top Authors
Issue Authors
  • Yurlungur (59)
  • jhp-lanl (35)
  • chadmeyer (4)
  • dholladay00 (3)
  • rbberger (3)
  • pbrady (2)
  • aematts (1)
  • mauneyc-LANL (1)
  • cmauney (1)
  • guadabsb15 (1)
  • pdmullen (1)
  • parikshitbajpai (1)
  • Bismarrck (1)
  • Yannicked (1)
Pull Request Authors
  • Yurlungur (120)
  • jhp-lanl (51)
  • rbberger (49)
  • github-actions[bot] (22)
  • mauneyc-LANL (14)
  • dholladay00 (13)
  • gopsub (10)
  • aematts (10)
  • pbrady (7)
  • Yannicked (6)
  • ktsai7 (5)
  • AlexHls (5)
  • AstroBarker (4)
  • pdmullen (4)
  • RyanWollaeger (4)
Top Labels
Issue Labels
enhancement (27) good first issue (17) discussion (14) bug (14) help wanted (13) clean-up (12) Robustness (9) build (9) documentation (8) Testing (8) interface (8) question (2) Performance (1)
Pull Request Labels
enhancement (39) bug (37) clean-up (23) documentation (20) build (17) Testing (14) Robustness (13) interface (11) Performance (5) discussion (2)

Dependencies

.github/workflows/deps.yml actions
  • actions/checkout v3 composite
.github/workflows/docs.yml actions
  • actions/checkout v3 composite
  • peaceiris/actions-gh-pages v3.7.3 composite
.github/workflows/formatting.yml actions
  • actions/checkout v3 composite
.github/workflows/tests.yml actions
  • actions/checkout v3 composite
.github/workflows/tests_minimal.yml actions
  • actions/checkout v3 composite