Abmarl
Abmarl: Connecting Agent-Based Simulations with Multi-Agent Reinforcement Learning - Published in JOSS (2021)
Science Score: 98.0%
This score indicates how likely this project is to be science-related based on various indicators:
- ○ CITATION.cff file
- ✓ codemeta.json file: found codemeta.json file
- ✓ .zenodo.json file: found .zenodo.json file
- ✓ DOI references: found 7 DOI reference(s) in README and JOSS metadata
- ✓ Academic publication links: links to joss.theoj.org
- ✓ Committers with academic emails: 4 of 6 committers (66.7%) from academic institutions
- ✓ Institutional organization owner: organization llnl has institutional domain (software.llnl.gov)
- ✓ JOSS paper metadata: published in Journal of Open Source Software
Keywords
Repository
Agent Based Modeling and Reinforcement Learning
Basic Info
Statistics
- Stars: 71
- Watchers: 6
- Forks: 19
- Open Issues: 67
- Releases: 0
Topics
Metadata Files
README.md
Abmarl
Abmarl is a package for developing Agent-Based Simulations and training them with Multi-Agent Reinforcement Learning (MARL). We provide an intuitive command line interface for engaging with the full workflow of MARL experimentation: training, visualizing, and analyzing agent behavior. We define an Agent-Based Simulation Interface and Simulation Manager, which control which agents interact with the simulation at each step. We support integration with popular reinforcement learning simulation interfaces, including gym.Env, MultiAgentEnv, and OpenSpiel. We define our own GridWorld Simulation Framework for creating custom grid-based Agent-Based Simulations.
Abmarl leverages RLlib’s framework for reinforcement learning and extends it to more easily support custom simulations, algorithms, and policies. We enable researchers to rapidly prototype MARL experiments and simulation design and lower the barrier for pre-existing projects to prototype RL as a potential solution.
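To give a sense of what the Agent-Based Simulation Interface mentioned above involves, the sketch below shows roughly what a custom simulation looks like: a class implementing reset, step, and the per-agent getters that a Simulation Manager queries. The class and method names follow the Abmarl documentation, but the example itself is an illustrative assumption, not code taken from this repository.

```python
# Rough sketch of a custom Agent-Based Simulation (interface names assumed
# from the Abmarl docs; the simulation logic here is purely illustrative).
from abmarl.sim import AgentBasedSimulation


class CountdownSim(AgentBasedSimulation):
    """Toy simulation that ends after a fixed number of steps."""

    def __init__(self, agents=None, horizon=10, **kwargs):
        self.agents = agents or {}   # dict of Agent objects keyed by agent id
        self.horizon = horizon

    def reset(self, **kwargs):
        self.step_count = 0

    def step(self, action_dict, **kwargs):
        # action_dict maps the ids of the agents acting this step to actions.
        self.step_count += 1

    def render(self, **kwargs):
        print(f"step {self.step_count}/{self.horizon}")

    def get_obs(self, agent_id, **kwargs):
        return {}          # observation reported back to this agent

    def get_reward(self, agent_id, **kwargs):
        return 0.0         # reward for this agent at this step

    def get_done(self, agent_id, **kwargs):
        return self.step_count >= self.horizon

    def get_all_done(self, **kwargs):
        return self.step_count >= self.horizon

    def get_info(self, agent_id, **kwargs):
        return {}
```

A Simulation Manager (for example, one that steps all agents together or one agent per turn) would wrap an instance of such a class and decide which agents interact with the simulation at each step, as described above.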
Quickstart
To use Abmarl, install via pip: pip install abmarl
To develop Abmarl, clone the repository and install via pip's development mode.
git clone git@github.com:LLNL/Abmarl.git
cd abmarl
pip install -r requirements/requirements_all.txt
pip install -e . --no-deps
Train agents in a multicorridor simulation:
abmarl train examples/multi_corridor_example.py
Visualize trained behavior:
abmarl visualize ~/abmarl_results/MultiCorridor-2020-08-25_09-30/ -n 5 --record
Note: If you install with conda, then you must also include ffmpeg in your virtual environment.
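The configuration module passed to `abmarl train` (such as the multi-corridor example above) is an ordinary Python file that builds the wrapped simulation and a parameter dictionary handed to RLlib. The sketch below illustrates that layout under the assumption that it mirrors the documented examples; the import paths, the field names, the `PG` algorithm, and the stopping criterion are illustrative choices, not a copy of `examples/multi_corridor_example.py`.

```python
# Hypothetical training configuration module (layout assumed from the Abmarl
# documentation; consult examples/multi_corridor_example.py for the real file).
from abmarl.sim.corridor import MultiCorridor
from abmarl.managers import TurnBasedManager
from abmarl.external import MultiAgentWrapper
from ray.tune.registry import register_env

# Build the simulation, hand control to a Simulation Manager, and wrap it so
# RLlib can drive it through its MultiAgentEnv interface.
sim = MultiCorridor()
agents = sim.agents
sim = MultiAgentWrapper(TurnBasedManager(sim))
register_env('MultiCorridor', lambda config: sim)

# One shared policy for every agent in the corridor.
policies = {
    'corridor': (
        None,
        agents['agent0'].observation_space,
        agents['agent0'].action_space,
        {},
    )
}

params = {
    'experiment': {
        'title': 'MultiCorridor',
        'sim_creator': lambda config=None: sim,
    },
    'ray_tune': {
        'run_or_experiment': 'PG',
        'stop': {'episodes_total': 2000},
        'config': {
            'env': 'MultiCorridor',
            'multiagent': {
                'policies': policies,
                'policy_mapping_fn': lambda agent_id: 'corridor',
            },
        },
    },
}
```

Under these assumptions, `abmarl train` reads the parameter dictionary from this module, runs training, and saves results under `~/abmarl_results/`, which is the directory the visualize command above points at.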
Documentation
You can find the latest Abmarl documentation on our ReadTheDocs page.
Community
Citation
Abmarl has been published in the Journal of Open Source Software (JOSS). It can be cited with the following BibTeX entry:
@article{Rusu2021,
doi = {10.21105/joss.03424},
url = {https://doi.org/10.21105/joss.03424},
year = {2021},
publisher = {The Open Journal},
volume = {6},
number = {64},
pages = {3424},
author = {Edward Rusu and Ruben Glatt},
title = {Abmarl: Connecting Agent-Based Simulations with Multi-Agent Reinforcement Learning},
journal = {Journal of Open Source Software}
}
Reporting Issues
Please use our issue tracker to report any bugs or submit feature requests. Great bug reports tend to have:
- A quick summary and/or background
- Steps to reproduce; sample code is best
- What you expected would happen
- What actually happens
Contributing
Please submit contributions via pull requests from a forked repository. Find out more about this process here. All contributions fall under the BSD 3-Clause License that covers the project.
Release
LLNL-CODE-815883
Owner
- Name: Lawrence Livermore National Laboratory
- Login: LLNL
- Kind: organization
- Email: github-admin@llnl.gov
- Location: Livermore, CA, USA
- Website: https://software.llnl.gov
- Twitter: LLNL_OpenSource
- Repositories: 520
- Profile: https://github.com/LLNL
For over 70 years, the Lawrence Livermore National Laboratory has applied science and technology to make the world a safer place.
JOSS Publication
Abmarl: Connecting Agent-Based Simulations with Multi-Agent Reinforcement Learning
Authors
Edward Rusu and Ruben Glatt (Lawrence Livermore National Laboratory)
Tags
agent-based simulation, multi-agent reinforcement learning, machine learning, agent-based modeling
GitHub Events
Total
- Watch event: 12
- Fork event: 2
Last Year
- Watch event: 12
- Fork event: 2
Committers
Last synced: 7 months ago
Top Committers
| Name | Email | Commits |
|---|---|---|
| Eddie Rusu | r****1@l****v | 1,630 |
| Daniel S. Katz | d****z@i****g | 2 |
| mojoee | 4****e | 1 |
| metal-oopa | s****4@g****m | 1 |
| glatt1 | g****1@l****v | 1 |
| Andrew Gillette | g****7@l****v | 1 |
Issues and Pull Requests
Last synced: 6 months ago
All Time
- Total issues: 156
- Total pull requests: 92
- Average time to close issues: 4 months
- Average time to close pull requests: about 23 hours
- Total issue authors: 3
- Total pull request authors: 3
- Average comments per issue: 0.86
- Average comments per pull request: 0.04
- Merged pull requests: 90
- Bot issues: 0
- Bot pull requests: 0
Past Year
- Issues: 0
- Pull requests: 0
- Average time to close issues: N/A
- Average time to close pull requests: N/A
- Issue authors: 0
- Pull request authors: 0
- Average comments per issue: 0
- Average comments per pull request: 0
- Merged pull requests: 0
- Bot issues: 0
- Bot pull requests: 0
Top Authors
Issue Authors
- rusu24edward (142)
- aowen87 (8)
- a-vinod (1)
Pull Request Authors
- rusu24edward (105)
- gillette7 (2)
- metal-oopa (1)
Top Labels
Issue Labels
Pull Request Labels
Packages
- Total packages: 1
- Total downloads: 127 last month (pypi)
- Total dependent packages: 0
- Total dependent repositories: 0
- Total versions: 12
- Total maintainers: 1
pypi.org: abmarl
Agent Based Simulation and MultiAgent Reinforcement Learning
- Homepage: https://github.com/llnl/abmarl
- Documentation: https://abmarl.readthedocs.io/en/latest/index.html
- License: BSD 3
- Latest release: 0.2.8 (published almost 2 years ago)
Rankings
Maintainers (1)
Dependencies
- actions/checkout v2 composite
- actions/setup-python v2 composite
- actions/checkout v2 composite
- actions/setup-python v2 composite
- actions/checkout v2 composite
- actions/setup-python v2 composite
- flake8 *
- gym <0.22
- importlib-metadata <5.0
- matplotlib *
- open-spiel *
- pytest *
- ray ==2.0.0
- seaborn *
- sphinx *
- sphinx-rtd-theme *
- tensorflow *
- gym <0.22
- importlib-metadata <5.0
- ray *
- tensorflow *
