stable-baselines3

PyTorch version of Stable Baselines, reliable implementations of reinforcement learning algorithms.

https://github.com/dlr-rm/stable-baselines3

Science Score: 75.0%

This score indicates how likely this project is to be science-related based on various indicators:

  • CITATION.cff file
    Found CITATION.cff file
  • codemeta.json file
    Found codemeta.json file
  • .zenodo.json file
    Found .zenodo.json file
  • DOI references
    Found 2 DOI reference(s) in README
  • Academic publication links
  • Committers with academic emails
    8 of 161 committers (5.0%) from academic institutions
  • Institutional organization owner
    Organization dlr-rm has institutional domain (rm.dlr.de)
  • JOSS paper metadata
  • Scientific vocabulary similarity
    Low similarity (16.1%) to scientific vocabulary

Keywords

baselines gsde gym machine-learning openai python pytorch reinforcement-learning reinforcement-learning-algorithms robotics sb3 sde stable-baselines toolbox

Keywords from Contributors

gymnasium multi-agent-reinforcement-learning jax multiagent-reinforcement-learning gym-environment transformers imitation-learning inverse-reinforcement-learning reward-learning autonomous-driving
Last synced: 4 months ago

Repository

PyTorch version of Stable Baselines, reliable implementations of reinforcement learning algorithms.

Basic Info
Statistics
  • Stars: 11,426
  • Watchers: 66
  • Forks: 1,924
  • Open Issues: 79
  • Releases: 29
Topics
baselines gsde gym machine-learning openai python pytorch reinforcement-learning reinforcement-learning-algorithms robotics sb3 sde stable-baselines toolbox
Created over 5 years ago · Last pushed 4 months ago
Metadata Files
Readme Contributing License Code of conduct Citation

README.md


Stable Baselines3

Stable Baselines3 (SB3) is a set of reliable implementations of reinforcement learning algorithms in PyTorch. It is the next major version of Stable Baselines.

You can read a detailed presentation of Stable Baselines3 in the v1.0 blog post or our JMLR paper.

These algorithms will make it easier for the research community and industry to replicate, refine, and identify new ideas, and will create good baselines to build projects on top of. We expect these tools will be used as a base around which new ideas can be added, and as a tool for comparing a new approach against existing ones. We also hope that the simplicity of these tools will allow beginners to experiment with a more advanced toolset, without being buried in implementation details.

Note: Despite its simplicity of use, Stable Baselines3 (SB3) assumes you have some knowledge about Reinforcement Learning (RL). You should not use this library without some practice. To that end, we provide good resources in the documentation to get started with RL.

Main Features

The performance of each algorithm was tested (see the Results section on its respective page); you can take a look at issues #48 and #49 for more details.

We also provide detailed logs and reports on the OpenRL Benchmark platform.

| Features                        | Stable-Baselines3  |
| ------------------------------- | ------------------ |
| State of the art RL methods     | :heavy_check_mark: |
| Documentation                   | :heavy_check_mark: |
| Custom environments             | :heavy_check_mark: |
| Custom policies                 | :heavy_check_mark: |
| Common interface                | :heavy_check_mark: |
| Dict observation space support  | :heavy_check_mark: |
| Ipython / Notebook friendly     | :heavy_check_mark: |
| Tensorboard support             | :heavy_check_mark: |
| PEP8 code style                 | :heavy_check_mark: |
| Custom callback                 | :heavy_check_mark: |
| High code coverage              | :heavy_check_mark: |
| Type hints                      | :heavy_check_mark: |
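The custom-callback support listed above can be illustrated with a short sketch built on `BaseCallback` from `stable_baselines3.common.callbacks`; the callback class and its print logic here are our own toy example, not part of the library:

```python
from stable_baselines3 import PPO
from stable_baselines3.common.callbacks import BaseCallback


class PrintTimestepsCallback(BaseCallback):
    """Toy callback: print progress every `check_freq` steps."""

    def __init__(self, check_freq: int = 1_000, verbose: int = 0):
        super().__init__(verbose)
        self.check_freq = check_freq

    def _on_step(self) -> bool:
        # self.num_timesteps is maintained by SB3 during .learn()
        if self.num_timesteps % self.check_freq == 0:
            print(f"{self.num_timesteps} timesteps so far")
        return True  # returning False would stop training early


model = PPO("MlpPolicy", "CartPole-v1")
model.learn(total_timesteps=5_000, callback=PrintTimestepsCallback())
```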

Planned features

Since most of the features from the original roadmap have been implemented, no major changes are planned for SB3; it is now stable. If you want to contribute, you can search the issues for those where help is welcome and for other proposed enhancements.

While SB3 development is now focused on bug fixes and maintenance (documentation updates, user experience, ...), there is more active development going on in the associated repositories:

  • newer algorithms are regularly added to the SB3 Contrib repository
  • faster variants are developed in the SBX (SB3 + Jax) repository
  • the training framework for SB3, the RL Zoo, has an active roadmap

Migration guide: from Stable-Baselines (SB2) to Stable-Baselines3 (SB3)

A migration guide from SB2 to SB3 can be found in the documentation.

Documentation

Documentation is available online: https://stable-baselines3.readthedocs.io/

Integrations

Stable-Baselines3 integrates with other libraries and services such as Weights & Biases for experiment tracking and Hugging Face for storing and sharing trained models. You can find out more in the dedicated section of the documentation.
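As a sketch of the model-sharing integration, the snippet below downloads a pretrained agent from the Hugging Face Hub; it assumes the companion `huggingface_sb3` helper package is installed, and the repo id and filename shown are illustrative examples that may not match what is actually published:

```python
# Sketch: fetch a pretrained SB3 model from the Hugging Face Hub.
# Assumes `pip install huggingface_sb3`.
from huggingface_sb3 import load_from_hub

from stable_baselines3 import PPO

checkpoint = load_from_hub(
    repo_id="sb3/ppo-CartPole-v1",   # example repo id (assumption)
    filename="ppo-CartPole-v1.zip",  # model file inside that repo
)
model = PPO.load(checkpoint)
```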

RL Baselines3 Zoo: A Training Framework for Stable Baselines3 Reinforcement Learning Agents

RL Baselines3 Zoo is a training framework for Reinforcement Learning (RL).

It provides scripts for training, evaluating agents, tuning hyperparameters, plotting results and recording videos.

In addition, it includes a collection of tuned hyperparameters for common environments and RL algorithms, and agents trained with those settings.

Goals of this repository:

  1. Provide a simple interface to train and enjoy RL agents
  2. Benchmark the different Reinforcement Learning algorithms
  3. Provide tuned hyperparameters for each environment and RL algorithm
  4. Have fun with the trained agents!

Github repo: https://github.com/DLR-RM/rl-baselines3-zoo

Documentation: https://rl-baselines3-zoo.readthedocs.io/en/master/
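For a flavour of the workflow, training and then replaying an agent with the Zoo is typically a one-line command each. This is a sketch assuming the pip-installed `rl_zoo3` package and its `train`/`enjoy` entry points; flags may differ between versions:

```sh
# Train PPO on CartPole-v1 with the Zoo's tuned hyperparameters,
# then replay ("enjoy") the trained agent.
python -m rl_zoo3.train --algo ppo --env CartPole-v1
python -m rl_zoo3.enjoy --algo ppo --env CartPole-v1
```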

SB3-Contrib: Experimental RL Features

We implement experimental features in a separate contrib repository: SB3-Contrib

This allows SB3 to maintain a stable and compact core, while still providing the latest features, like Recurrent PPO (PPO LSTM), CrossQ, Truncated Quantile Critics (TQC), Quantile Regression DQN (QR-DQN) or PPO with invalid action masking (Maskable PPO).

Documentation is available online: https://sb3-contrib.readthedocs.io/
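Contrib algorithms follow the same API as core SB3; a minimal sketch, assuming `pip install sb3-contrib`:

```python
# Contrib algorithms are drop-in replacements for core SB3 ones.
from sb3_contrib import TQC

model = TQC("MlpPolicy", "Pendulum-v1", verbose=1)
model.learn(total_timesteps=10_000)
```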

Stable-Baselines Jax (SBX)

Stable Baselines Jax (SBX) is a proof of concept version of Stable-Baselines3 in Jax, with recent algorithms like DroQ or CrossQ.

It provides a minimal set of features compared to SB3 but can be much faster (up to 20x!): https://twitter.com/araffin2/status/1590714558628253698
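A minimal sketch of the mirrored API, assuming the `sbx-rl` package (imported as `sbx`); names may differ between releases:

```python
# SBX mirrors the SB3 interface, but runs on Jax.
from sbx import SAC

model = SAC("MlpPolicy", "Pendulum-v1")
model.learn(total_timesteps=10_000)
```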

Installation

Note: Stable-Baselines3 supports PyTorch >= 2.3

Prerequisites

Stable Baselines3 requires Python 3.9+.

Windows

To install Stable-Baselines3 on Windows, please look at the documentation.

Install using pip

Install the Stable Baselines3 package:

```sh
pip install 'stable-baselines3[extra]'
```

This includes optional dependencies like Tensorboard, OpenCV or ale-py to train on Atari games. If you do not need those, you can use:

```sh
pip install stable-baselines3
```

Please read the documentation for more details and alternatives (from source, using docker).

Example

Most of the code in the library tries to follow a sklearn-like syntax for the Reinforcement Learning algorithms.

Here is a quick example of how to train and run PPO on a CartPole environment:

```python
import gymnasium as gym

from stable_baselines3 import PPO

env = gym.make("CartPole-v1", render_mode="human")

model = PPO("MlpPolicy", env, verbose=1)
model.learn(total_timesteps=10_000)

vec_env = model.get_env()
obs = vec_env.reset()
for i in range(1000):
    action, _states = model.predict(obs, deterministic=True)
    obs, reward, done, info = vec_env.step(action)
    vec_env.render()
    # VecEnv resets automatically
    # if done:
    #     obs = env.reset()

env.close()
```

Or just train a model with a one-liner if the environment is registered in Gymnasium and if the policy is registered:

```python
from stable_baselines3 import PPO

model = PPO("MlpPolicy", "CartPole-v1").learn(10_000)
```
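Trained models can also be saved to disk and reloaded with the standard `save`/`load` API; a short sketch (the file name is arbitrary):

```python
# Persist the trained policy and restore it later.
model.save("ppo_cartpole")       # writes ppo_cartpole.zip next to the script
del model                        # pretend we start from scratch
model = PPO.load("ppo_cartpole")
```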

Please read the documentation for more examples.

Try it online with Colab Notebooks!

All the following examples can be executed online using Google Colab notebooks.

Implemented Algorithms

| Name          | Recurrent          | Box                | Discrete           | MultiDiscrete      | MultiBinary        | Multi Processing   |
| ------------- | ------------------ | ------------------ | ------------------ | ------------------ | ------------------ | ------------------ |
| ARS¹          | :x:                | :heavy_check_mark: | :heavy_check_mark: | :x:                | :x:                | :heavy_check_mark: |
| A2C           | :x:                | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: |
| CrossQ¹       | :x:                | :heavy_check_mark: | :x:                | :x:                | :x:                | :heavy_check_mark: |
| DDPG          | :x:                | :heavy_check_mark: | :x:                | :x:                | :x:                | :heavy_check_mark: |
| DQN           | :x:                | :x:                | :heavy_check_mark: | :x:                | :x:                | :heavy_check_mark: |
| HER           | :x:                | :heavy_check_mark: | :heavy_check_mark: | :x:                | :x:                | :heavy_check_mark: |
| PPO           | :x:                | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: |
| QR-DQN¹       | :x:                | :x:                | :heavy_check_mark: | :x:                | :x:                | :heavy_check_mark: |
| RecurrentPPO¹ | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: |
| SAC           | :x:                | :heavy_check_mark: | :x:                | :x:                | :x:                | :heavy_check_mark: |
| TD3           | :x:                | :heavy_check_mark: | :x:                | :x:                | :x:                | :heavy_check_mark: |
| TQC¹          | :x:                | :heavy_check_mark: | :x:                | :x:                | :x:                | :heavy_check_mark: |
| TRPO¹         | :x:                | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: |
| Maskable PPO¹ | :x:                | :x:                | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: |

¹ Implemented in the SB3 Contrib GitHub repository.

Actions `gymnasium.spaces`:

  • Box: A N-dimensional box that contains every point in the action space.
  • Discrete: A list of possible actions, where each timestep only one of the actions can be used.
  • MultiDiscrete: A list of possible actions, where each timestep only one action of each discrete set can be used.
  • MultiBinary: A list of possible actions, where each timestep any of the actions can be used in any combination.
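To make these concrete, here is a small sketch that constructs and samples each space with `gymnasium`; the shapes and sizes are arbitrary examples:

```python
import numpy as np
from gymnasium import spaces

box = spaces.Box(low=-1.0, high=1.0, shape=(3,), dtype=np.float32)
discrete = spaces.Discrete(4)                  # actions 0..3
multi_discrete = spaces.MultiDiscrete([3, 2])  # one choice per discrete set
multi_binary = spaces.MultiBinary(5)           # any combination of 5 on/off flags

for space in (box, discrete, multi_discrete, multi_binary):
    print(space, "->", space.sample())
```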

Testing the installation

Install dependencies

```sh
pip install -e .[docs,tests,extra]
```

Run tests

All unit tests in stable-baselines3 can be run using the pytest runner:

```sh
make pytest
```

To run a single test file:

```sh
python3 -m pytest -v tests/test_env_checker.py
```

To run a single test:

```sh
python3 -m pytest -v -k 'test_check_env_dict_action'
```

You can also do a static type check using mypy:

```sh
pip install mypy
make type
```

Codestyle check with ruff:

```sh
pip install ruff
make lint
```

Projects Using Stable-Baselines3

We try to maintain a list of projects using stable-baselines3 in the documentation. Please tell us if you want your project to appear on this page ;)

Citing the Project

To cite this repository in publications:

```bibtex
@article{stable-baselines3,
  author  = {Antonin Raffin and Ashley Hill and Adam Gleave and Anssi Kanervisto and Maximilian Ernestus and Noah Dormann},
  title   = {Stable-Baselines3: Reliable Reinforcement Learning Implementations},
  journal = {Journal of Machine Learning Research},
  year    = {2021},
  volume  = {22},
  number  = {268},
  pages   = {1-8},
  url     = {http://jmlr.org/papers/v22/20-1364.html}
}
```

Note: If you need to refer to a specific version of SB3, you can also use the Zenodo DOI.

Maintainers

Stable-Baselines3 is currently maintained by Ashley Hill (aka @hill-a), Antonin Raffin (aka @araffin), Maximilian Ernestus (aka @ernestum), Adam Gleave (@AdamGleave), Anssi Kanervisto (@Miffyli) and Quentin Gallouédec (@qgallouedec).

Important Note: We do not provide technical support or consulting, and we do not answer personal questions via email. Please post your question on the RL Discord, Reddit, or Stack Overflow instead.

How To Contribute

To anyone interested in making the baselines better: there is still some documentation that needs to be done. If you want to contribute, please read the CONTRIBUTING.md guide first.

Acknowledgments

The initial work to develop Stable Baselines3 was partially funded by the project Reduced Complexity Models from the Helmholtz-Gemeinschaft Deutscher Forschungszentren, and by the EU's Horizon 2020 Research and Innovation Programme under grant number 951992 (VeriDream).

The original version, Stable Baselines, was created in the robotics lab U2IS (INRIA Flowers team) at ENSTA ParisTech.

Logo credits: L.M. Tenkes

Owner

  • Name: DLR-RM
  • Login: DLR-RM
  • Kind: organization
  • Location: 48.08329, 11.27507

German Aerospace Center (DLR) - Institute of Robotics and Mechatronics (RM) - open source projects

Citation (CITATION.bib)

@article{stable-baselines3,
  author  = {Antonin Raffin and Ashley Hill and Adam Gleave and Anssi Kanervisto and Maximilian Ernestus and Noah Dormann},
  title   = {Stable-Baselines3: Reliable Reinforcement Learning Implementations},
  journal = {Journal of Machine Learning Research},
  year    = {2021},
  volume  = {22},
  number  = {268},
  pages   = {1-8},
  url     = {http://jmlr.org/papers/v22/20-1364.html}
}

Committers

Last synced: 7 months ago

All Time
  • Total Commits: 856
  • Total Committers: 161
  • Avg Commits per committer: 5.317
  • Development Distribution Score (DDS): 0.391
Past Year
  • Commits: 46
  • Committers: 22
  • Avg Commits per committer: 2.091
  • Development Distribution Score (DDS): 0.5
Top Committers
Name Email Commits
Antonin RAFFIN a****n@e****g 521
Quentin Gallouédec 4****c 51
Noah Dormann N****n@d****e 47
Adam Gleave a****m@g****e 17
Anssi k****1@h****m 10
Alex Pasquali a****8@g****m 8
Juan Rocamonde j****e@g****m 6
PatrickHelm 9****m 5
Thomas Simonini s****o@g****m 4
Stelios Tymvios 5****d 4
M. Ernestus m****n@e****e 4
Corentin 1****r 4
Tobias Rohrer t****r@o****m 3
Sidney Tio 3****o 3
Rohan Tangri 4****o 3
Mark Towers m****s@g****m 3
Chris Schindlbeck c****k@g****m 2
Bernhard Raml B****l@g****t 2
Costa Huang c****g@o****m 2
Dominic Kerr d****1@g****m 2
Francesco Capuano 7****o 2
Grégoire Passault g****t@g****m 2
Jan-Hendrik Ewers me@j****k 2
Marc Duclusaud 5****s 2
Marsel Khisamutdinov c****s@g****m 2
Megan Klaiber m****r@o****m 2
Oleksii Kachaiev k****v@g****m 2
Tom Dörr t****6@g****m 2
Parth Kothari 1****1 2
Paul Scheikl p****l@k****u 2
and 131 more...

Issues and Pull Requests

Last synced: 4 months ago

All Time
  • Total issues: 566
  • Total pull requests: 229
  • Average time to close issues: about 2 months
  • Average time to close pull requests: 2 months
  • Total issue authors: 428
  • Total pull request authors: 75
  • Average comments per issue: 3.06
  • Average comments per pull request: 2.0
  • Merged pull requests: 151
  • Bot issues: 0
  • Bot pull requests: 0
Past Year
  • Issues: 106
  • Pull requests: 83
  • Average time to close issues: 12 days
  • Average time to close pull requests: 4 days
  • Issue authors: 93
  • Pull request authors: 26
  • Average comments per issue: 1.69
  • Average comments per pull request: 0.36
  • Merged pull requests: 54
  • Bot issues: 0
  • Bot pull requests: 0
Top Authors
Issue Authors
  • npit (8)
  • JaimeParker (8)
  • Kallinteris-Andreas (7)
  • fede72bari (6)
  • araffin (6)
  • PBerit (5)
  • nrigol (5)
  • XiaobenLi00 (5)
  • wilhem (5)
  • suargi (4)
  • d505 (4)
  • aheidariiiiii1993 (4)
  • MetallicaSPA (4)
  • JDRanpariya (4)
  • Familyforever7 (4)
Pull Request Authors
  • araffin (94)
  • qgallouedec (11)
  • markscsmith (6)
  • corentinlger (6)
  • cschindlbeck (6)
  • PatrickHelm (5)
  • Mahsarnzh (5)
  • pseudo-rnd-thoughts (5)
  • JoshuaBluem (4)
  • fracapuano (4)
  • MarcDcls (4)
  • Copilot (4)
  • BertrandDecoster (3)
  • iwishiwasaneagle (3)
  • Zhanwei-Liu (2)
Top Labels
Issue Labels
question (289) custom gym env (117) bug (96) duplicate (87) enhancement (80) more information needed (65) check the checklist (55) help wanted (38) RTFM (38) documentation (34) check the checkboxes (28) No tech support (22) openai gym (20) windows (14) good first issue (7) trading warning (5) colab (4) mac os (1) Maintainers on vacation (1)
Pull Request Labels
PR template not filled (8) experimental (3) check the checklist (3) mac os (2) help wanted (2)

Packages

  • Total packages: 5
  • Total downloads:
    • pypi 678,109 last-month
  • Total docker downloads: 3,126
  • Total dependent packages: 88
    (may contain duplicates)
  • Total dependent repositories: 647
    (may contain duplicates)
  • Total versions: 168
  • Total maintainers: 6
pypi.org: stable-baselines3

Pytorch version of Stable Baselines, implementations of reinforcement learning algorithms.

  • Versions: 106
  • Dependent Packages: 88
  • Dependent Repositories: 641
  • Downloads: 678,096 Last month
  • Docker Downloads: 3,126
Rankings
Dependent packages count: 0.3%
Stargazers count: 0.3%
Dependent repos count: 0.5%
Average: 0.9%
Downloads: 1.0%
Forks count: 1.1%
Docker downloads count: 2.0%
Last synced: 4 months ago
proxy.golang.org: github.com/DLR-RM/stable-baselines3
  • Versions: 29
  • Dependent Packages: 0
  • Dependent Repositories: 1
Rankings
Forks count: 0.8%
Stargazers count: 0.9%
Average: 4.0%
Dependent repos count: 4.7%
Dependent packages count: 9.6%
Last synced: 4 months ago
proxy.golang.org: github.com/dlr-rm/stable-baselines3
  • Versions: 29
  • Dependent Packages: 0
  • Dependent Repositories: 0
Rankings
Forks count: 0.7%
Stargazers count: 0.9%
Average: 4.7%
Dependent packages count: 7.9%
Dependent repos count: 9.3%
Last synced: 4 months ago
pypi.org: rigged-sb3

Experimental version of SB3

  • Versions: 1
  • Dependent Packages: 0
  • Dependent Repositories: 0
  • Downloads: 13 Last month
Rankings
Stargazers count: 0.4%
Forks count: 1.2%
Dependent packages count: 6.6%
Average: 16.0%
Dependent repos count: 30.6%
Downloads: 41.1%
Maintainers (1)
Last synced: 4 months ago
conda-forge.org: stable-baselines3
  • Versions: 3
  • Dependent Packages: 0
  • Dependent Repositories: 5
Rankings
Forks count: 4.4%
Stargazers count: 4.8%
Dependent repos count: 14.8%
Average: 18.9%
Dependent packages count: 51.6%
Last synced: 4 months ago

Dependencies

setup.py pypi
  • For *
  • Plotting *
  • cloudpickle *
  • gym ==0.21
  • matplotlib *
  • numpy *
  • pandas *
  • torch >=1.11