l2hmc-qcd
Application of the L2HMC algorithm to simulations in lattice QCD.
Science Score: 64.0%
This score indicates how likely this project is to be science-related based on various indicators:
- ✓ CITATION.cff file: Found CITATION.cff file
- ✓ codemeta.json file: Found codemeta.json file
- ✓ .zenodo.json file: Found .zenodo.json file
- ○ DOI references
- ✓ Academic publication links: Links to: arxiv.org
- ✓ Committers with academic emails: 3 of 5 committers (60.0%) from academic institutions
- ○ Institutional organization owner
- ○ JOSS paper metadata
- ○ Scientific vocabulary similarity: Low similarity (9.1%) to scientific vocabulary
Keywords
Repository
Basic Info
- Host: GitHub
- Owner: saforem2
- License: apache-2.0
- Language: Jupyter Notebook
- Default Branch: main
- Homepage: https://saforem2.github.io/l2hmc-qcd/
- Size: 874 MB
Statistics
- Stars: 67
- Watchers: 5
- Forks: 9
- Open Issues: 0
- Releases: 4
Topics
Metadata Files
README.md
Contents
- [Overview](#overview)
  * [Papers 📚, Slides 📊, etc.](https://github.com/saforem2/l2hmc-qcd/#training--experimenting)
  * [Background](#background)
- [Installation](#installation)
- [Training](#training)
- [Configuration Management](#configuration-management)
- [Running @ ALCF](#running-at-ALCF)
- [Details](#details)
  * [Organization](#organization)
    + [Dynamics / Network](#dynamics---network)
      - [Network Architecture](#network-architecture)
    + [Lattice](#lattice)

Overview
Papers 📚, Slides 📊, etc.
- 📊 Slides (07/31/2023 @ Lattice 2023)
- 📝 Papers:
  - LeapfrogLayers: A Trainable Framework for Effective Topological Sampling, 2022
  - Accelerated Sampling Techniques for Lattice Gauge Theory @ BNL & RBRC: DWQ @ 25 (12/2021)
  - Training Topological Samplers for Lattice Gauge Theory, from ML for HEP, on and off the Lattice @ $\mathrm{ECT}^{*}$ Trento (09/2021) (+ 📊 slides)
  - Deep Learning Hamiltonian Monte Carlo @ Deep Learning for Simulation (SimDL) Workshop, ICLR 2021
    - 📚 arXiv:2105.03418
    - 📊 poster
Background
The L2HMC algorithm aims to improve upon HMC by optimizing a carefully chosen loss function which is designed to minimize autocorrelations within the Markov Chain, thereby improving the efficiency of the sampler.
A detailed description of the original L2HMC algorithm can be found in the paper:
Generalizing Hamiltonian Monte Carlo with Neural Networks
with implementation available at brain-research/l2hmc/ by Daniel Levy, Matt D. Hoffman and Jascha Sohl-Dickstein.
Broadly, given an analytically described target distribution, π(x), L2HMC provides a statistically exact sampler that:
- Quickly converges to the target distribution (fast burn-in).
- Quickly produces uncorrelated samples (fast mixing).
- Is able to efficiently mix between energy levels.
- Is capable of traversing low-density zones to mix between modes (often difficult for generic HMC).
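For concreteness, the loss in the original paper rewards a large expected squared jump distance between the current configuration and the proposal, weighted by the acceptance probability. Below is a minimal sketch of that idea; the code, names, and `scale` parameter are hypothetical illustrations, not this repo's implementation:

```Python
# Hypothetical sketch of the expected-squared-jump-distance loss from
# Levy et al. (2017); not this repo's implementation.
import torch

def esjd_loss(x: torch.Tensor, x_prop: torch.Tensor,
              accept_prob: torch.Tensor, scale: float = 1.0) -> torch.Tensor:
    """x, x_prop: (batch, dim) current / proposed samples;
    accept_prob: (batch,) Metropolis-Hastings acceptance probabilities."""
    # Expected squared jump distance, weighted by the acceptance probability
    # (small epsilon avoids division by zero for rejected / unmoved chains).
    esjd = accept_prob * ((x_prop - x) ** 2).sum(dim=-1) + 1e-6
    # Reciprocal term pushes jumps to be large; linear term regularizes.
    return (scale ** 2 / esjd - esjd / scale ** 2).mean()
```

Minimizing this loss directly suppresses autocorrelations: chains that barely move (small `esjd`) are penalized heavily by the reciprocal term.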
Installation
Warning
It is recommended to install inside an existing virtual environment
(ideally one with tensorflow, pytorch [horovod, deepspeed] already installed).
From source (RECOMMENDED)
```Shell
git clone https://github.com/saforem2/l2hmc-qcd
cd l2hmc-qcd
# for development addons:
# python3 -m pip install -e ".[dev]"
python3 -m pip install -e .
```
From l2hmc on PyPI:
```Shell
python3 -m pip install l2hmc
```
Test install:
```Shell
python3 -c 'import l2hmc ; print(l2hmc.__file__)'
/path/to/l2hmc-qcd/src/l2hmc/__init__.py
```
Training
Configuration Management
This project uses hydra for configuration management and
supports distributed training for both PyTorch and TensorFlow.
In particular, we support the following combinations of framework + backend for distributed training:
- TensorFlow (+ Horovod for distributed training)
- PyTorch +
  - DDP
  - Horovod
  - DeepSpeed
The main entry point is src/l2hmc/main.py,
which contains the logic for running an end-to-end Experiment.
An Experiment consists of the following sub-tasks:
- Training
- Evaluation
- HMC (for comparison and to measure model improvement)
All configuration options can be dynamically overridden via the CLI at runtime,
and we can specify our desired framework and backend combination via:
```Shell
python3 main.py mode=debug framework=pytorch backend=deepspeed precision=fp16
```
to run a (non-distributed) Experiment with pytorch + deepspeed at fp16 precision.
The src/l2hmc/conf/config.yaml file contains a brief
explanation of each of the various parameter options, and values can be
overridden either by modifying the config.yaml file, or directly through the
command line, e.g.
```Shell
cd src/l2hmc
./train.sh mode=debug framework=pytorch > train.log 2>&1 &
tail -f train.log $(tail -1 logs/latest)
```
Additional information about various configuration options can be found in:
- src/l2hmc/configs.py: Implementations of the concrete Python objects that are adjustable for our experiment.
- src/l2hmc/conf/config.yaml: Starting point with default configuration options for a generic Experiment.
For more information on how this works, see Hydra's Documentation Page.
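To make the pattern concrete, here is a minimal, self-contained hydra entry point. It is a hypothetical stand-in for src/l2hmc/main.py (assuming a conf/config.yaml next to the script), showing how every config field becomes overridable from the CLI exactly as in the commands above:

```Python
# Minimal hydra entry-point sketch (hypothetical; see src/l2hmc/main.py
# for the real one).  Run e.g.: python3 app.py framework=pytorch precision=fp16
import hydra
from omegaconf import DictConfig, OmegaConf

@hydra.main(version_base=None, config_path="conf", config_name="config")
def main(cfg: DictConfig) -> None:
    # cfg mirrors conf/config.yaml, with any CLI overrides already applied.
    print(OmegaConf.to_yaml(cfg))

if __name__ == "__main__":
    main()
```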
Running at ALCF
For running with distributed training on ALCF systems, we provide a complete
src/l2hmc/train.sh
script which should run without issues on either Polaris or ThetaGPU @ ALCF.
Details
Goal: Use L2HMC to efficiently generate gauge configurations for calculating observables in lattice QCD.
A detailed description of the (ongoing) work to apply this algorithm to simulations in lattice QCD (specifically, a 2D U(1) lattice gauge theory model) can be found in arXiv:2105.03418.
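As a concrete example of the observables involved: in the 2D U(1) theory the links are angles, and the topological charge counts the wrapped plaquette angles in units of 2π. The sketch below uses the standard textbook definition and is hypothetical illustration code, not this repo's implementation:

```Python
# Hypothetical sketch of the 2D U(1) topological charge (standard definition);
# not this repo's implementation.  x holds link angles with shape (Lx, Ly, 2).
import numpy as np

def plaquettes(x: np.ndarray) -> np.ndarray:
    """Plaquette angle at each site: x_mu(n) + x_nu(n+mu) - x_mu(n+nu) - x_nu(n)."""
    x0, x1 = x[..., 0], x[..., 1]
    return x0 + np.roll(x1, -1, axis=0) - np.roll(x0, -1, axis=1) - x1

def topological_charge(x: np.ndarray) -> float:
    # Wrap each plaquette angle into [-pi, pi) before summing.
    p = plaquettes(x)
    wrapped = np.remainder(p + np.pi, 2 * np.pi) - np.pi
    return float(wrapped.sum() / (2 * np.pi))

x = np.random.uniform(-np.pi, np.pi, size=(8, 8, 2))
print(topological_charge(x))  # an integer, up to floating-point rounding
```

The key difficulty L2HMC targets is that ordinary HMC changes this charge very rarely, so its autocorrelation time explodes as the lattice is made finer.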
Organization
Dynamics / Network
For a given target distribution, π(x), the Dynamics object
(src/l2hmc/dynamics/) implements methods for generating
proposal configurations (x' ~ π) using the generalized leapfrog update.
This generalized leapfrog update takes as input a buffer of lattice
configurations x and generates a proposal configuration x' = Dynamics(x) by
evolving generalized L2HMC dynamics.
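Schematically, the momentum half-step of the generalized leapfrog update from the original L2HMC paper augments the standard update with learned scale ($s_v$), transformation ($q_v$), and translation ($t_v$) functions (the $x$-update is analogous, applied to a masked subset of coordinates, which we omit here for clarity):

$$
v' \;=\; v \odot \exp\!\left(\tfrac{\varepsilon}{2}\, s_v(\zeta)\right)
\;-\; \tfrac{\varepsilon}{2}\left[\,\partial_x U(x) \odot \exp\!\left(\varepsilon\, q_v(\zeta)\right) + t_v(\zeta)\,\right],
$$

where $\zeta$ denotes the network inputs, $\odot$ is the elementwise product, and setting $s_v = q_v = t_v = 0$ recovers the ordinary leapfrog update.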
Network Architecture
An illustration of the leapfrog layer updating (x, v) --> (x', v') can be seen below.
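The networks inside each leapfrog layer follow a simple pattern: take $(x, v)$ (plus gradients) as input and emit the three outputs $(s, q, t)$ used in the update above. A minimal, hypothetical sketch of that pattern, not the repo's actual architecture:

```Python
# Hypothetical sketch of a leapfrog-layer network emitting (s, q, t);
# not the repo's actual architecture.
import torch
import torch.nn as nn

class LeapfrogNet(nn.Module):
    def __init__(self, xdim: int, hidden: int = 64):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(2 * xdim, hidden), nn.ReLU())
        self.scale = nn.Linear(hidden, xdim)        # s: log-scale on v (or x)
        self.transform = nn.Linear(hidden, xdim)    # q: log-scale on the force
        self.translation = nn.Linear(hidden, xdim)  # t: additive shift

    def forward(self, x: torch.Tensor, v: torch.Tensor):
        h = self.encoder(torch.cat([x, v], dim=-1))
        # tanh keeps the learned log-scales bounded, as in the original paper
        return (torch.tanh(self.scale(h)),
                torch.tanh(self.transform(h)),
                self.translation(h))
```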
Contact
Code author: Sam Foreman
Pull requests and issues should be directed to: saforem2
Citation
If you use this code or found this work interesting, please cite our work along with the original paper:
```bibtex
@misc{foreman2021deep,
  title={Deep Learning Hamiltonian Monte Carlo},
  author={Sam Foreman and Xiao-Yong Jin and James C. Osborn},
  year={2021},
  eprint={2105.03418},
  archivePrefix={arXiv},
  primaryClass={hep-lat}
}
```
```bibtex
@article{levy2017generalizing,
  title={Generalizing Hamiltonian Monte Carlo with Neural Networks},
  author={Levy, Daniel and Hoffman, Matthew D. and Sohl-Dickstein, Jascha},
  journal={arXiv preprint arXiv:1711.09268},
  year={2017}
}
```
Acknowledgement
Note
This research used resources of the Argonne Leadership Computing Facility, which is a DOE Office of Science User Facility supported under contract DE-AC02-06CH11357.
This work describes objective technical results and analysis.
Any subjective views or opinions that might be expressed in the work do not necessarily represent the views of the U.S. DOE or the United States Government.
Owner
- Name: Sam Foreman
- Login: saforem2
- Kind: user
- Location: Chicago, IL
- Company: @argonne-lcf
- Website: https://samforeman.me
- Twitter: saforem2
- Repositories: 234
- Profile: https://github.com/saforem2
- Bio: AI 4 Science @argonne-lcf
Citation (CITATION.cff)
# This CITATION.cff file was generated with cffinit.
# Visit https://bit.ly/cffinit to generate yours today!
cff-version: 1.2.0
title: l2hmc-qcd
message: >-
  If you use this software, please cite it using the
  metadata from this file.
type: software
authors:
  - given-names: Sam
    family-names: Foreman
    email: foremans@anl.gov
    affiliation: Argonne National Laboratory
    orcid: 'https://orcid.org/0000-0002-9981-0876'
repository-code: 'https://github.com/saforem2/l2hmc-qcd'
keywords:
- machine learning
- normalizing flows
- lattice qcd
- lattice gauge theory
- markov chain monte carlo
- hamiltonian monte carlo
license: Apache-2.0
GitHub Events
Total
- Watch event: 3
- Fork event: 1
Last Year
- Watch event: 3
- Fork event: 1
Committers
Last synced: about 1 year ago
Top Committers
| Name | Email | Commits |
|---|---|---|
| Sam Foreman | s****2@g****m | 6,129 |
| Sam Foreman | f****s@c****v | 4 |
| Ed Bennett | e****t@s****k | 2 |
| Sourcery AI | | 2 |
| Sam Foreman | f****s@i****v | 2 |
Committer Domains (Top 20 + Academic)
Issues and Pull Requests
Last synced: 6 months ago
All Time
- Total issues: 2
- Total pull requests: 99
- Average time to close issues: over 1 year
- Average time to close pull requests: 3 days
- Total issue authors: 1
- Total pull request authors: 3
- Average comments per issue: 4.5
- Average comments per pull request: 0.38
- Merged pull requests: 93
- Bot issues: 0
- Bot pull requests: 7
Past Year
- Issues: 0
- Pull requests: 0
- Average time to close issues: N/A
- Average time to close pull requests: N/A
- Issue authors: 0
- Pull request authors: 0
- Average comments per issue: 0
- Average comments per pull request: 0
- Merged pull requests: 0
- Bot issues: 0
- Bot pull requests: 0
Top Authors
Issue Authors
- matthewfeickert (2)
Pull Request Authors
- saforem2 (80)
- sourcery-ai[bot] (4)
- edbennett (1)
Top Labels
Issue Labels
Pull Request Labels
Dependencies
- euporie ^1.4.3 develop
- ipykernel ^6.12.1 develop
- ipython ^8.2.0 develop
- notebook ^6.4.10 develop
- ptipython ^1.0.1 develop
- accelerate ^0.6.2
- arviz ^0.12.0
- bokeh ^2.4.2
- celerite ^0.4.2
- h5py ^3.6.0
- horovod ^0.24.2
- hydra-colorlog ^1.1.0
- hydra-core ^1.1.1
- ipython ^8.2.0
- joblib ^1.1.0
- matplotlib ^3.5.1
- matplotx ^0.3.6
- mpi4py ^3.1.3
- neovim ^0.3.1
- nodejs ^0.1.1
- numpy ^1.22.3
- pynvim ^0.4.3
- pyright ^1.1.235
- python ^3.10,<3.11
- rich ^12.1.0
- seaborn ^0.11.2
- tensorflow ^2.8.0
- torch ^1.11.0
- wandb ^0.12.11
- xarray ^2022.3.0