MetalWalls

MetalWalls: A classical molecular dynamics software dedicated to the simulation of electrochemical systems - Published in JOSS (2020)

https://gitlab.com/ampere2/metalwalls

Science Score: 89.0%

This score indicates how likely this project is to be science-related based on various indicators:

  • CITATION.cff file
  • codemeta.json file
  • .zenodo.json file
  • DOI references
    Found 11 DOI reference(s) in README and JOSS metadata
  • Academic publication links
    Links to: joss.theoj.org
  • Committers with academic emails
    11 of 41 committers (26.8%) from academic institutions
  • Institutional organization owner
  • JOSS paper metadata
    Published in Journal of Open Source Software
Last synced: 4 months ago

Repository

Basic Info
  • Host: gitlab.com
  • Owner: ampere2
  • License: MIT
  • Default Branch: release
Statistics
  • Stars: 14
  • Forks: 13
  • Open Issues: 9
  • Releases: 0
Created over 5 years ago

https://gitlab.com/ampere2/metalwalls/blob/release/

METALWALLS
==========

[![Documentation](https://img.shields.io/badge/docs-latest-brightgreen.svg)](https://gitlab.com/ampere2/metalwalls/-/wikis/home)
[![DOI](https://joss.theoj.org/papers/10.21105/joss.02373/status.svg)](https://doi.org/10.21105/joss.02373)
[![Repository](https://img.shields.io/badge/Zenodo-10.5281/zenodo.4912611-blue)](https://doi.org/10.5281/zenodo.4912611)

MetalWalls (MW) is a molecular dynamics code dedicated to the modelling of electrochemical systems. Its main originality is the inclusion of a series of methods that allow a constant potential to be applied within the electrode materials.

**Extended documentation is provided in the [WIKI](https://gitlab.com/ampere2/metalwalls/-/wikis/home) section of the GitLab project.**
It covers the implemented force fields, thermodynamic ensembles and models, the installation process, the input and output files, and guidelines for developers.

In the following we reproduce the installation instructions. To report bugs or to contact the developers, please raise an issue on the [GitLab page](https://gitlab.com/ampere2/metalwalls/-/wikis/home).

# Reference

[A. Marin-Laflèche, M. Haefele, L. Scalfi, A. Coretti, T. Dufils, G. Jeanmairet, S. Reed, A. Serva, R. Berthin, C. Bacon, S. Bonella, B. Rotenberg, P.A. Madden, and M. Salanne. MetalWalls: A Classical Molecular Dynamics Software Dedicated to the Simulation of Electrochemical Systems. Journal of Open Source Software, 5, 2373, DOI:10.21105/joss.02373 (2020)](https://dx.doi.org/10.21105/joss.02373)

# Compiling

MW requires a Fortran compiler and the LAPACK library. The installation is based on a Makefile. A few machine-dependent variables must be defined in the file *./config.mk* prior to invoking the `make` utility. The structure of the *config.mk* file is as follows:

```make
# Compilation options
F90 := fortran-compiler
F90FLAGS := compilation-flags
FPPFLAGS := preprocessor-flags
LDFLAGS := linker-flags
F2PY := path-to-f2py
F90WRAP := path-to-f90wrap
FCOMPILER := f2py-option (intel, intelem, gnu95...)
J := flag-to-specify-modfiles-output-dir (GNU: -J, Intel: -module)

# Path to pFUnit (Unit testing Framework) -- optional
PFUNIT := path-to-pfunit
```

Some examples are provided for common use cases in the *./computers/* directory.
For example, to compile MW on a Linux machine with the GNU compiler, one can use the following parameters.
Here we assume that the MPI compiler wrapper is in the user's PATH.

```make
# Compilation options
F90 := mpif90
F90FLAGS := -O2 -g
FPPFLAGS := -cpp
LDFLAGS := -llapack
F2PY := f2py
F90WRAP := f90wrap
FCOMPILER := gnu95
J := -J
# Path to pFUnit (Unit testing Framework)
PFUNIT := /opt/pfunit/pfunit-parallel
```

On a typical cluster with Intel Skylake processors and the Intel compiler:
```make
# Compilation options
F90 := mpiifort
F90STDFLAGS := -g
F90OPTFLAGS := -O2 -xCORE-AVX512 -align array64byte
F90REPORTFLAGS :=
F90FLAGS := $(F90STDFLAGS) $(F90OPTFLAGS) $(F90REPORTFLAGS)
FPPFLAGS := -fpp
LDFLAGS := -mkl=cluster
F2PY := f2py
F90WRAP := f90wrap
FCOMPILER := intelem
J := -module
# Path to pFUnit (Unit testing Framework)
PFUNIT := $(ALL_CCCHOME)/opt/pfunit/pfunit-parallel
```

On a typical cluster with NVIDIA V100 GPUs and the pgfortran compiler:
```make
# Compilation options
F90 := pgfortran
# alternative with aggressive host optimization:
# F90FLAGS := -fast -Mvect -m64 -Minfo=ccff -Mpreprocess -g
F90FLAGS := -tp=px -Minfo=ccff -Mpreprocess
F90FLAGS += -acc -ta=tesla:managed -Minline
F90FLAGS += -Minfo=accel,inline
FPPFLAGS := -DMW_SERIAL
LDFLAGS := -llapack -lblas
J := -module
```
**Warning:** Continuous integration tests are not run on GPUs, so you should always check the calculation on a CPU architecture before starting production runs.

Some internal preprocessor flags can be defined in `FPPFLAGS` to activate certain features:

- `-DMW_USE_PLUMED` to compile with the PLUMED library
- `-DMW_SERIAL` to compile in serial mode
- `-DMW_CI` to enable the unit tests

Note that on GPUs `-DMW_SERIAL` is enforced, since the parallelization is handled by OpenACC.

The Makefile is located in the root directory. The command to compile the code is simply `make`: it creates the object files in a dedicated build directory and produces the *mw* executable in the root directory.
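
As a minimal sketch of the full workflow (the machine file name below is hypothetical; pick whichever file in *./computers/* matches your setup):

```bash
# from the root of the MetalWalls repository
cp computers/my-machine.mk config.mk   # hypothetical file name, see ./computers/
$EDITOR config.mk                      # adjust compiler, flags and library paths
make                                   # builds the objects and the ./mw executable
./mw --version                         # quick sanity check of the new binary
```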


# PLUMED

MW can be run with the PLUMED biased-MD library. To achieve this, specify the option `-DMW_USE_PLUMED` in the `FPPFLAGS` variable of the *./config.mk* file. PLUMED should be installed and compiled externally to MW, following the procedure on the [PLUMED website](https://www.plumed.org/). To then link PLUMED to MW, type from the root directory of the code `./`:
```bash
plumed patch --new mw2
plumed patch --patch --shared --engine mw2
```
See the `./example/plumed/` directory for an example of an MW run coupled to PLUMED.


# Python interface

It is possible (but not necessary) to compile MW as a python library using **f2py** and **f90wrap**. For this, define the variables `F2PY`, `F90WRAP` and `FCOMPILER` in *./config.mk*, add the flag `-fPIC` to `F90FLAGS` and use the command `make python`. Please be aware that this compiler flag may decrease performance. For more information, see the [python interface page](https://gitlab.com/ampere2/metalwalls/-/wikis/python-interface).
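
A minimal sketch of the sequence (the final import check is an assumption: the actual module name is documented on the python interface wiki page):

```bash
# 1. in ./config.mk: set F2PY, F90WRAP and FCOMPILER, and add -fPIC to F90FLAGS
# 2. build the python bindings
make python
# 3. sanity-check the build (module name is a placeholder, see the wiki)
python -c "import mw"
```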


# Testing
Two test suites are available in the *./tests/* folder: one contains unit tests that check individual subroutines, the other contains regression tests that run the code as a whole. The two suites are independent and can be run separately.
For normal users, we recommend running only the regression tests.

## Regression tests

To run the regression test suite, you will need a working python interpreter with the **numpy** package installed.

Regression tests are reference test cases against which the code is compared.
To run the regression tests, type the following command in the *./tests/* directory:

```bash
python regression_tests.py
```

To run a reduced version of the tests one can use:

```bash
python regression_tests.py -r
```

To run only a subset of the tests one can use:

```bash
python regression_tests.py -s subset-name
```

To run tests that use the python interface, one can specify the path to the python executable using:

```bash
python regression_tests.py -s python_interface -py path-to-python
```

The various test subsets are:
* *nist*: energy comparison for the NIST validation case
* *benchmark*: forces, energies and charges comparison with LAMMPS for several systems
* *tosi_fumi*: comparison of forces, energies and stress tensor with PIM results for a NaCl system
* *pim*: comparison with PIM results
* *aim*: comparison with PIMAIM results
* *dihedrals*: comparison with reference data
* *matrix_inversion*: comparison of matrix, forces, energies, dipoles and charges with reference data
* *maze*: comparison of dipoles and electrodes calculated with the MaZe method against the conjugate gradient and matrix inversion methods
* *charge_neutrality*: comparison of charge calculation with conjugate gradient for symmetric and asymmetric potential differences
* *non_neutral*: comparison of matrix, forces, energies and charges with reference data for a non-neutral electrolyte
* *plumed*: plumed test
* *external_field*: comparison with reference data
* *thomas_fermi*: comparison with reference data
* *python_interface*: *nist* test case run with the python interface
* *steele*: comparison with reference data
* *piston*: comparison with reference data
* *four_site_model*: comparison with reference data
* *efg*: comparison with reference data
* *unwrap*: comparison with reference data
* *dip_plus_elec*: comparison of simultaneous calculation of dipoles and electrodes using conjugate gradient with reference data
* *dump_per_species*: comparison with reference data
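
For instance, to run only the NIST validation case from the *./tests/* directory:

```bash
python regression_tests.py -s nist
```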

## Unit tests

Unit tests aim at validating individual components of the code. We stress that they are used by developers to monitor the code and are not recommended for normal users.

In order to properly run the unit test suite, you will need to install pFUnit (the unit test framework).
To install pFUnit, download version 3.3.3 from the [project page on GitHub](https://github.com/Goddard-Fortran-Ecosystem/pFUnit/releases/tag/3.3.3)
and follow the installation instructions in its *README.md*, which we briefly reproduce here for a *bash* example. It is necessary to compile the MPI-enabled version of pFUnit.

```bash
# download and unpack pFUnit 3.3.3
wget https://github.com/Goddard-Fortran-Ecosystem/pFUnit/archive/3.3.3.tar.gz
tar -zxvf 3.3.3.tar.gz
cd pFUnit-3.3.3/
# select the GNU toolchain and its MPI wrapper
export F90=gfortran
export F90_VENDOR=GNU
export MPIF90=mpif90
# build the MPI-enabled version, run its self-tests, and install
make tests MPI=YES
make install INSTALL_DIR=/path-to-pfunit/pfunit-parallel
```

To run the tests, MW has to be compiled with the preprocessor option `-DMW_CI`, specified in the *config.mk* file.
To launch the tests, do

```bash
make
make check
```
which will compile and run the unit tests. The `make check` command is a shortcut for

```bash
make mw_tests
cd tests/pFUnit
../../mw_tests
```

On batch processing systems you need to build the *mw_tests* executable and create an appropriate job submission script, as sketched below.
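
A minimal sketch, assuming a SLURM scheduler (directives, partition names and module loads depend on your cluster):

```bash
#!/bin/bash
#SBATCH --job-name=mw_tests
#SBATCH --nodes=1
#SBATCH --ntasks=4
#SBATCH --time=00:30:00

# build the test executable beforehand (e.g. on the login node): make mw_tests
cd tests/pFUnit
srun ../../mw_tests
```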

# Running MW

## Running

Running a MW simulation requires two input files: a configuration file, *runtime.inpt*, and a data file, *data.inpt*.
Their format is described in the [system configuration page](https://gitlab.com/ampere2/metalwalls/-/wikis/system-configuration) and the [data input file page](https://gitlab.com/ampere2/metalwalls/-/wikis/data-input-file-format).
In a folder with the *data.inpt* and *runtime.inpt* files, run the executable with an MPI wrapper, for example:

```bash
mpirun -np 4 ./mw
```
The behavior of the executable can be tuned with the following flags:

```bash
  -h, --help                 show a help message
  -v, --version              show program version number and exit
      --output-rank=VALUE    enables output on some of the ranks
                             possible VALUE are:
                               root     - only rank 0 performs output (default)
                               all      - all mpi processes perform output
                               r1[,r2]* - comma separated list of rank ids which perform output
```
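
For example, following the help text above, to make only ranks 0 and 2 write output:

```bash
mpirun -np 4 ./mw --output-rank=0,2
```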

## Restarting

The MW-generated restart files have the same format as the data files, so one only has to rename them to *data.inpt* and run the simulation again.
Pay attention to the velocity creation keyword in the system configuration file *runtime.inpt*.

If you are running simulations using the Mass-Zero method (*maze*) with matrix inversion, the computed matrix can be given as input *maze_matrix.inpt*. Similarly, if you are using the *matrix_inversion* algorithm, the computed matrix can be given as input *hessian_matrix.inpt*. When restarting the simulation, these matrices will be read instead of computed from scratch.
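
A minimal restart sketch (the restart file name below is hypothetical; use the name of the restart file actually produced by your run):

```bash
# keep a backup of the previous input, then promote the restart file
mv data.inpt data.inpt.bak
cp restart.out data.inpt    # hypothetical restart file name
mpirun -np 4 ./mw
```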

# Funding

The development of MW has received support from:
* [EoCoE](http://www.eocoe.eu), a project funded by the European Union under Contracts No. H2020-EINFRA-2015-1-676629 and H2020-INFRAEDI-2018-824158.
* European Research Council (ERC), under the European Union's Horizon 2020 research and innovation programme (Grant Agreement No. 771294).
* French National Research Agency (Labex STORE-EX, Grant No. ANR-10-LABX-0076).

Owner

  • Name: AMPERE
  • Login: ampere2
  • Kind: organization

JOSS Publication

MetalWalls: A classical molecular dynamics software dedicated to the simulation of electrochemical systems
Published
September 25, 2020
Volume 5, Issue 53, Page 2373
Authors
Abel Marin-Laflèche
Maison de la Simulation, CEA, CNRS, Univ. Paris-Sud, UVSQ, Université Paris-Saclay, F-91191 Gif-sur-Yvette, France
Matthieu Haefele
Maison de la Simulation, CEA, CNRS, Univ. Paris-Sud, UVSQ, Université Paris-Saclay, F-91191 Gif-sur-Yvette, France
Laura Scalfi
Sorbonne Université, CNRS, Physico-chimie des Électrolytes et Nanosystèmes Interfaciaux, PHENIX, F-75005 Paris, France
Alessandro Coretti
Department of Mathematical Sciences, Politecnico di Torino, I-10129 Torino, Italy, Centre Européen de Calcul Atomique et Moléculaire (CECAM), École Polytechnique Fédérale de Lausanne, 1015 Lausanne, Switzerland
Thomas Dufils
Sorbonne Université, CNRS, Physico-chimie des Électrolytes et Nanosystèmes Interfaciaux, PHENIX, F-75005 Paris, France
Guillaume Jeanmairet
Sorbonne Université, CNRS, Physico-chimie des Électrolytes et Nanosystèmes Interfaciaux, PHENIX, F-75005 Paris, France
Stewart K. Reed
School of Chemistry, University of Leeds, Leeds, UK
Alessandra Serva
Sorbonne Université, CNRS, Physico-chimie des Électrolytes et Nanosystèmes Interfaciaux, PHENIX, F-75005 Paris, France
Roxanne Berthin
Sorbonne Université, CNRS, Physico-chimie des Électrolytes et Nanosystèmes Interfaciaux, PHENIX, F-75005 Paris, France
Camille Bacon
Sorbonne Université, CNRS, Physico-chimie des Électrolytes et Nanosystèmes Interfaciaux, PHENIX, F-75005 Paris, France
Sara Bonella
Centre Européen de Calcul Atomique et Moléculaire (CECAM), École Polytechnique Fédérale de Lausanne, 1015 Lausanne, Switzerland
Benjamin Rotenberg
Sorbonne Université, CNRS, Physico-chimie des Électrolytes et Nanosystèmes Interfaciaux, PHENIX, F-75005 Paris, France
Paul A. Madden
Department of Materials, University of Oxford, Oxford, UK
Mathieu Salanne
Maison de la Simulation, CEA, CNRS, Univ. Paris-Sud, UVSQ, Université Paris-Saclay, F-91191 Gif-sur-Yvette, France, Sorbonne Université, CNRS, Physico-chimie des Électrolytes et Nanosystèmes Interfaciaux, PHENIX, F-75005 Paris, France
Editor
Richard Gowers
Tags
Fortran, molecular dynamics, electrodes, ionic liquids

Committers

Last synced: 4 months ago

All Time
  • Total Commits: 1,343
  • Total Committers: 41
  • Avg Commits per committer: 32.756
  • Development Distribution Score (DDS): 0.754
Past Year
  • Commits: 2
  • Committers: 2
  • Avg Commits per committer: 1.0
  • Development Distribution Score (DDS): 0.5
Top Committers
Name Email Commits
Abel Marin-Lafleche a****e@m****r 330
Alessandro Coretti a****i@p****t 320
Laura l****i@u****r 283
Matthieu Haefele m****e@m****r 65
dufils t****s@u****r 55
Mathieu Salanne s****e@M****l 49
Mathieu Salanne m****e@u****r 43
iurii i****k@s****r 28
FedericaA f****i@g****m 27
Mathieu Salanne s****e@m****e 26
salanne u****z@j****r 13
salanne u****z@j****r 12
Laura Scalfi PHENIX s****i@o****l 10
camille.bacon@sorbonne-universite.fr c****n@s****r 8
salanne u****z@j****r 8
Mathieu Salanne s****e@M****l 7
Mathieu Salanne s****e@M****e 7
Matthieu Haefele m****e@u****r 7
salanne u****z@j****r 7
Iurii Chubak g****k@g****m 6
Mathieu Salanne s****e@o****l 5
salanne u****z@j****r 5
Mathieu Salanne m****e@s****r 3
Abel Marin-Lafleche a****e@c****r 2
Alessandro Coretti a****i@e****h 1
Laura Scalfi PHENIX s****i@o****l 1
Mathieu Salanne s****e@c****r 1
Mathieu Salanne s****e@c****r 1
Mathieu Salanne s****e@c****r 1
Mathieu Salanne s****e@m****e 1
and 11 more...
