iharm3D
iharm3D: Vectorized General Relativistic Magnetohydrodynamics - Published in JOSS (2021)
Science Score: 98.0%
This score indicates how likely this project is to be science-related based on various indicators:
- ○ CITATION.cff file: not found
- ✓ codemeta.json file: found
- ✓ .zenodo.json file: found
- ✓ DOI references: found 12 DOI reference(s) in README and JOSS metadata
- ✓ Academic publication links: links to joss.theoj.org
- ✓ Committers with academic emails: 15 of 17 committers (88.2%) from academic institutions
- ✓ Institutional organization owner: organization afd-illinois has institutional domain (rainman.astro.illinois.edu)
- ✓ JOSS paper metadata: published in the Journal of Open Source Software
Scientific Fields
Repository
Hybrid MPI/OpenMP 3D HARM with vectorization
Basic Info
Statistics
- Stars: 15
- Watchers: 13
- Forks: 12
- Open Issues: 10
- Releases: 1
Metadata Files
README.md
iharm3D
This code implements the HARM algorithm outlined in Gammie et al. 2003, with some modifications outlined in McKinney & Gammie 2004. This is a second-order, conservative, shock-capturing scheme for general-relativistic magnetohydrodynamics (GRMHD). Credit also to the many people who have worked on the code over the years, including Scott Noble, who implemented the first 3D version of the code, Josh Dolence, Ben Ryan, George Wong, and Ben Prather.
Requirements
iharm3D requires an MPI/Parallel HDF5 stack. In practice, this means that the executable h5pcc must be in your PATH
and working correctly. It also requires the GNU Scientific Library (GSL) to be installed or loaded.
Most Linux distributions package these requirements, and most supercomputers have modules for them.
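As a quick sanity check before building (a convenience sketch, not part of iharm3D itself), you can verify that both prerequisites are visible in your environment:

```bash
# Check that the parallel-HDF5 compiler wrapper and GSL are visible.
# gsl-config ships with most GSL installations; on clusters, loading the
# appropriate modules should put both tools on your PATH.
missing=""
command -v h5pcc >/dev/null 2>&1 || missing="$missing h5pcc"
command -v gsl-config >/dev/null 2>&1 || missing="$missing gsl-config"
if [ -z "$missing" ]; then
    echo "build prerequisites found"
else
    echo "missing:$missing -- install these or load the matching modules"
fi
```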
Building
Provided the above is met, iharm3D can be built with
```bash
$ make
```
This builds the 'torus' problem, prob/torus, which sets up the initial conditions in the equilibrium torus configuration of
Fishbone & Moncrief 1976. This is by far the most common problem used for science runs.
Alternative (mostly testing) problems can be built by specifying their folder name in prob/, e.g.
```bash
$ make PROB=mhdmodes
```
Refer to existing problems and/or forthcoming developer documentation for details on how to add new problem definitions.
The make process or flags can be customized by adding a host-specific makefile
```bash
$ touch machines/$(hostname).make
```
which can contain any valid make script and is read after setting most of the default parameters, in order to override them.
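For example, a minimal host-specific makefile might override the compiler wrapper and optimization flags. The variable names below are an illustrative sketch, not taken from the repository's makefile; check the main makefile for the variables it actually uses:

```bash
# Create a host-specific override file (contents are an illustrative sketch)
mkdir -p machines
cat > "machines/$(hostname).make" <<'EOF'
# Read after the defaults in the main makefile, so these take precedence
CC = h5pcc
CFLAGS += -O3 -fopenmp
EOF
```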
Configuration and Running
Building iharm3d produces a directory named build_archive in the directory where make is invoked. This archive contains
all the source files used in the build, as well as all the object files and a copy of the executable.
If build_archive already exists, make will prefer any newer/modified files in that directory, vs their equivalents in the original source.
This allows modifying the compile-time parameters in parameters.h, or even modifying the C code as desired, without disrupting the
original repository and potentially committing upstream whatever compile-time or runtime configuration you happen to be using.
Note that iharm3d also takes runtime parameters (most of the physical parameters, whereas grid size & MPI topology are compile-time).
iharm3d will automatically use any file called param.dat in the current working directory, and will output simulation data to the
working directory as well. You can specify an alternative parameter file with -p or output directory with -o. Sample runtime
parameters for each problem are provided in the problem directories.
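For instance, a run using a non-default parameter file and output directory might look like the following sketch. The file name my_params.dat and the output path are hypothetical placeholders, and the harm executable is assumed to have been built already:

```bash
# Run harm with an explicit parameter file (-p) and output directory (-o)
mkdir -p /tmp/torus_out
if [ -x ./harm ]; then
    ./harm -p my_params.dat -o /tmp/torus_out
else
    echo "harm executable not found -- build it first with make"
fi
```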
Due to this extra copy, note that between building different problems (e.g. from a torus to the MHD modes problem) one must run
```bash
$ make distclean
```
which removes the build_archive directory, including any customizations that had previously been applied. A simple make clean will remove just the object and executable files, preserving any customizations in build_archive.
Full details of production runs on larger machines, e.g. Stampede2, are in script/submit/checklist.txt in this repository, along with job submission scripts for SLURM in the TACC environment, adaptable to many other SLURM machines.
Running a Fishbone-Moncrief torus
The Fishbone-Moncrief (FM) torus is the ubiquitous initial condition for modelling compact radio sources such as M87* and Sgr A*. The FM problem can be built in iharm3d by passing the command-line argument PROB=torus to make, and configured by specifying problem-specific parameters, both compile-time (in parameters.h, in the build archive) and run-time (in a parameter file which iharm3d by default assumes to be named param.dat). Presupposing that all the necessary dependencies (e.g. OpenMP, MPI, parallel HDF5, GSL) are installed and that the directory variables and flags in the makefile point to them correctly, the following steps outline the commands to compile and execute the problem:
- Invoke the make command from the output directory,

```bash
$ make -f IHARM3D_DIRECTORY/makefile PROB=torus
```

where IHARM3D_DIRECTORY is the path to your local iharm3d repository that contains the makefile. The output directory is where, as explained in the section above, the harm executable is created along with build_archive. build_archive contains the source files necessary to run iharm3d, along with the problem-specific compile-time parameter file, parameters.h, and the problem initialization file, problem.c.
- Modify compile-time parameters in build_archive/parameters.h. These typically include (i) the grid size NiTOT; (ii) the number of MPI ranks NiCPU; (iii) the density and internal-energy floors BSQORHOMAX, UORHOMAX, BSQOUMAX; (iv) the reconstruction scheme RECONSTRUCTION. NOTE: if you're running iharm3d on your local system, it is recommended to run the FM problem at low resolution, or to run a 2D problem (set N3TOT to 1). Note that a strict minimum is placed on N1TOT based on the domain size, usually ~90 grid zones or greater, as simulations can become unstable when too few zones are placed within the event horizon of the central black hole.
- If the compile-time parameters have been modified, or the C code in any of the source files in build_archive has been edited, the harm executable must be remade with the same command as in (1), run from the output directory.
- Copy one of the parameter files located at IHARM3D_DIRECTORY/prob/torus/, labelled param_sane.dat or param_mad.dat, to the output directory and rename it param.dat. This file contains the runtime parameters for the FM torus (e.g. duration of the run, domain size, output file cadence, fluid properties, FM torus size, FMKS grid geometry). NOTE: it is again recommended to set tf to a reasonable value if you're running the problem on your local computer.
- Submitting the run: once the runtime parameters have been updated, you're ready to run the FM problem. The command used to launch the run depends on the capabilities of your system. (i) If you're executing the problem on a single-node system, you do not need MPI, and the following command (run from the output directory) should suffice,
```bash
$ ./harm -p param.dat >LOG_FILE
```
where the runtime log is redirected to LOG_FILE. If STDOUT is not redirected, the runtime log will be printed to the terminal. NOTE: you can set the number of cores over which iharm3d executes by setting the environment variable OMP_NUM_THREADS before launching the run. If it is not set, the problem will by default run across all available cores.
(ii) If you're running the problem on a multi-node system, you can utilize iharm3d's MPI functionality to parallelize the job across several nodes. The exact command to launch harm depends on the MPI implementation. If you are running iharm3d on a TACC system (which has the SLURM job scheduler), you may find the various job submission scripts located at IHARM3D_DIRECTORY/scripts/submit useful. You can submit the job on any TACC machine as,
```bash
$ sbatch -N (NODES) -p (QUEUE) IHARM3D_DIRECTORY/scripts/submit/SUBMIT_SCRIPT.sb
```
where SUBMIT_SCRIPT.sb is the job submission script that varies in accordance with the TACC system you're logged into.
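Such a submit script might be sketched as follows. The job name, node count, queue, and time limit below are illustrative placeholders, and the scripts under scripts/submit in the repository are the authoritative versions:

```bash
#!/bin/bash
#SBATCH -J fm_torus        # illustrative job name
#SBATCH -N 4               # number of nodes
#SBATCH -p normal          # queue/partition
#SBATCH -t 24:00:00        # wall-time limit

# ibrun is TACC's MPI launcher; other SLURM machines typically use
# mpirun or srun instead
if command -v ibrun >/dev/null 2>&1; then
    ibrun ./harm -p param.dat > run.log
else
    echo "ibrun not found -- run this through sbatch on a TACC system"
fi
```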
Basic plots
Having run the desired problem, one can use the basic_analysis.py script at scripts/analysis/simple to generate simple plots. To do this,
- Update params_analysis.dat in scripts/analysis/simple to match your problem. NOTE: DUMPSDIR must be the path to the dump files, and PLOTSDIR must be the path to the directory where you wish to save the plots.
- Run basic_analysis.py as,
```bash
$ python3 script/analysis/simple/basic_analysis.py -p script/analysis/simple/params_analysis.dat
```
The script by default parallelizes the analysis using Python's multiprocessing module; you can disable this by setting nthreads to 1 in main. For the 3D torus problem, it plots the density and the inverse plasma beta (magnetic pressure / gas pressure) in the XZ (poloidal) and XY (toroidal) planes, overlaying the poloidal density plot with magnetic field lines. For the 2D torus problem, it generates similar poloidal plots. For the output of a Bondi problem, it generates only the poloidal density plot, since the Bondi problem in iharm3d is unmagnetized and plotting the inverse plasma beta would not make sense. Finally, for the mhdmodes problem, the script plots the density in the XZ and XY planes.
We hope that this script sheds some light on the way data is stored in the dump files and grid file (a more detailed summary can be found here and here), and acts as a primer for the calculations performed to compute various quantities of interest and generate simple plots.
If you're looking for a more complete set of scripts that calculates and plots a near-exhaustive list of relevant GRMHD diagnostics, have a look at pyHARM.
Hacking
Notes that may save you time in reading the source code:
* Grid coordinates match physical coordinates i => x^1, j => x^2, k => x^3. However, they are indexed backward in memory: grid[k][j][i]. A number of loop aliases, e.g. ILOOP, ZLOOP, are defined to make this counter-intuitive ordering, as well as the presence of border "ghost" zones, easier to manage.
* The fluid state S is often modified in-place. Rest assured the accompanying grid G is not. Both are structs of arrays, given typedefs in order to allocate their backing memory contiguously.
* Comments are sparse, and usually concern implementation details, not algorithmic operation. See
iharm2d_v3 for a simpler version which may prove a gentler introduction.
Help & Contributing
Questions and suggestions about the code and/or documentation are welcome. If you run into problems, have questions, or would like to see a feature, we recommend raising an issue here.
We welcome collaboration from anyone interested in these problems or in contributing to the code. Feel free to get in touch either through GitHub by opening pull requests and forks, or directly to the developers via email.
Owner
- Name: AFD Group at UIUC
- Login: AFD-Illinois
- Kind: organization
- Email: gammie@illinois.edu
- Website: http://rainman.astro.illinois.edu
- Repositories: 19
- Profile: https://github.com/AFD-Illinois
JOSS Publication
iharm3D: Vectorized General Relativistic Magnetohydrodynamics
Authors
Physics Department, University of Illinois at Urbana--Champaign, 1110 West Green Street, Urbana, IL 61801, USA, Illinois Center for Advanced Studies of the Universe
Tags
magnetohydrodynamics general relativity astronomy dynamics galactic dynamics milky way
GitHub Events
Total
- Issues event: 2
- Watch event: 4
- Issue comment event: 3
- Pull request event: 2
- Fork event: 3
Last Year
- Issues event: 2
- Watch event: 4
- Issue comment event: 3
- Pull request event: 2
- Fork event: 3
Committers
Last synced: 7 months ago
Top Committers
| Name | Email | Commits |
|---|---|---|
| Ben Prather | b****2@i****u | 539 |
| Vedant Dhruv | 4****6 | 13 |
| Vedant Dhruv | v****2@b****u | 10 |
| George Wong | g****2@i****u | 9 |
| Ben Ryan | b****0@i****u | 7 |
| Josh Dolence | j****e@g****m | 6 |
| Charles F Gammie | g****e@i****u | 4 |
| Ben Prather | b****r@l****v | 2 |
| jdolence | j****e@b****v | 2 |
| Ben Ryan | b****n@l****u | 2 |
| Ben Ryan | b****n@l****u | 2 |
| Ben Ryan | b****0@l****u | 1 |
| Ben | b****r@t****v | 1 |
| Cesar Diaz | c****2@b****u | 1 |
| Mani Chandra | m****c@i****u | 1 |
| Vedant Dhruv | v****2@b****u | 1 |
| e-petersen | e****2@i****u | 1 |
Committer Domains (Top 20 + Academic)
Issues and Pull Requests
Last synced: 6 months ago
All Time
- Total issues: 20
- Total pull requests: 28
- Average time to close issues: 6 months
- Average time to close pull requests: about 1 month
- Total issue authors: 8
- Total pull request authors: 6
- Average comments per issue: 1.35
- Average comments per pull request: 0.5
- Merged pull requests: 21
- Bot issues: 0
- Bot pull requests: 0
Past Year
- Issues: 2
- Pull requests: 1
- Average time to close issues: N/A
- Average time to close pull requests: less than a minute
- Issue authors: 2
- Pull request authors: 1
- Average comments per issue: 1.5
- Average comments per pull request: 0.0
- Merged pull requests: 0
- Bot issues: 0
- Bot pull requests: 0
Top Authors
Issue Authors
- gnwong (11)
- bprather (3)
- cfgammie (1)
- cpalenzuela (1)
- bgiacoma (1)
- mraianeto (1)
- marioraianeto (1)
- farrukh63 (1)
Pull Request Authors
- vedantdhruv96 (13)
- bprather (11)
- Relativist1 (2)
- brryan (1)
- blancocd (1)
- gnwong (1)