rl4co
A PyTorch library for all things Reinforcement Learning (RL) for Combinatorial Optimization (CO)
Science Score: 54.0%
This score indicates how likely this project is to be science-related based on various indicators:
- ✓ CITATION.cff file: found
- ✓ codemeta.json file: found
- ✓ .zenodo.json file: found
- ○ DOI references
- ✓ Academic publication links: links to arxiv.org
- ○ Committers with academic emails
- ○ Institutional organization owner
- ○ JOSS paper metadata
- ○ Scientific vocabulary similarity: low similarity (12.5%) to scientific vocabulary
Repository
Basic Info
- Host: GitHub
- Owner: ai4co
- License: mit
- Language: Python
- Default Branch: main
- Homepage: https://rl4co.ai4co.org
- Size: 155 MB
Statistics
- Stars: 657
- Watchers: 11
- Forks: 118
- Open Issues: 4
- Releases: 20
Metadata Files
README.md
An extensive Reinforcement Learning (RL) for Combinatorial Optimization (CO) benchmark. Our goal is to provide a unified framework for RL-based CO algorithms, and to facilitate reproducible research in this field, decoupling the science from the engineering.
RL4CO is built upon:
- TorchRL: official PyTorch framework for RL algorithms and vectorized environments on GPUs
- TensorDict: a library to easily handle heterogeneous data such as states, actions and rewards (see the sketch below)
- PyTorch Lightning: a lightweight PyTorch wrapper for high-performance AI research
- Hydra: a framework for elegantly configuring complex applications
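To make the TensorDict point concrete, here is a minimal sketch of how heterogeneous, batched problem data can be carried in a single structure; the key names are illustrative, not RL4CO's internal schema:

```python
import torch
from tensordict import TensorDict

# A batch of 4 TSP-like instances; the key names are illustrative only.
td = TensorDict(
    {
        "locs": torch.rand(4, 50, 2),                     # node coordinates
        "current_node": torch.zeros(4, dtype=torch.int64),
        "reward": torch.zeros(4),
    },
    batch_size=[4],
)

# Batched operations apply to every field at once
first = td[0]          # slice one instance across all keys
td_cpu = td.to("cpu")  # move every tensor in one call
```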
We offer flexible and efficient implementations of the following policies:
- Constructive: learn to construct a solution from scratch
  - Autoregressive (AR): construct solutions one step at a time via a decoder
  - NonAutoregressive (NAR): learn to predict a heuristic, such as a heatmap, to then construct a solution
- Improvement: learn to improve a pre-existing solution
We also provide several utilities and modularized components. For example, reusable components such as environment embeddings can easily be swapped out to solve new problems; a sketch follows below.
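As a sketch of what this modularity might look like in practice, a custom initial embedding can be defined as a plain nn.Module and handed to a policy. The `init_embedding` keyword and the `td["locs"]` field below are assumptions drawn from the documented embedding interface; consult the RL4CO docs for the exact signature:

```python
import torch.nn as nn
from rl4co.models import AttentionModelPolicy

class MyInitEmbedding(nn.Module):
    """Hypothetical init embedding: projects 2D node coordinates
    into the policy's hidden dimension."""

    def __init__(self, embed_dim: int = 128):
        super().__init__()
        self.proj = nn.Linear(2, embed_dim)

    def forward(self, td):
        return self.proj(td["locs"])  # assumes coordinates live under "locs"

# `init_embedding` as a constructor override is an assumption here;
# check the RL4CO docs for the exact interface.
policy = AttentionModelPolicy(env_name="tsp", init_embedding=MyInitEmbedding())
```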
Getting started
RL4CO is available on PyPI and can be installed with pip:
```bash
pip install rl4co
```
To get started, we recommend checking out our quickstart notebook or the minimalistic example below.
Install from source
This command installs the bleeding edge main version, useful for staying up-to-date with the latest developments - for instance, if a bug has been fixed since the last official release but a new release hasn’t been rolled out yet:
```bash
pip install -U git+https://github.com/ai4co/rl4co.git
```
Local install and development
We recommend local development with the blazing-fast uv package manager, for instance:
```bash
git clone https://github.com/ai4co/rl4co && cd rl4co
uv sync --all-extras
source .venv/bin/activate
```
This will create a new virtual environment in .venv/ and install all dependencies.
Usage
Train model with default configuration (AM on TSP environment):
```bash
python run.py
```
> [!TIP]
> You may check out this notebook to get started with Hydra!
Change experiment settings
Train model with chosen experiment configuration from [configs/experiment/](configs/experiment/):

```bash
python run.py experiment=routing/am env=tsp env.num_loc=50 model.optimizer_kwargs.lr=2e-4
```

Here you may change the environment, e.g. with `env=cvrp` by command line, or modify the corresponding experiment file, e.g. [configs/experiment/routing/am.yaml](configs/experiment/routing/am.yaml).

Disable logging
```bash
python run.py experiment=routing/am logger=none '~callbacks.learning_rate_monitor'
```

Note that `~` is used to disable a callback that would need a logger.

Create a sweep over hyperparameters (-m for multirun)
```bash
python run.py -m experiment=routing/am model.optimizer.lr=1e-3,1e-4,1e-5
```
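For programmatic access to the same configuration system, Hydra's compose API can build a config without going through run.py. A minimal sketch, assuming the Hydra configs live under configs/ with a root config named main.yaml; both names are assumptions about the repository layout, so check before relying on them:

```python
from hydra import compose, initialize

# Assumes configs/ is the Hydra config root and `main.yaml` the root config;
# both names are assumptions about the repository layout.
with initialize(config_path="configs", version_base="1.3"):
    cfg = compose(
        config_name="main",
        overrides=["experiment=routing/am", "env.num_loc=50"],
    )
print(cfg.env.num_loc)  # -> 50
```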
Minimalistic Example

Here is a minimalistic example training an Attention Model policy with POMO on TSP in less than 30 lines of code:
```python
from rl4co.envs.routing import TSPEnv, TSPGenerator
from rl4co.models import AttentionModelPolicy, POMO
from rl4co.utils import RL4COTrainer

# Instantiate generator and environment
generator = TSPGenerator(num_loc=50, loc_distribution="uniform")
env = TSPEnv(generator)

# Create policy and RL model
policy = AttentionModelPolicy(env_name=env.name, num_encoder_layers=6)
model = POMO(env, policy, batch_size=64, optimizer_kwargs={"lr": 1e-4})

# Instantiate Trainer and fit
trainer = RL4COTrainer(max_epochs=10, accelerator="gpu", precision="16-mixed")
trainer.fit(model)
```
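Once trained, the policy can be rolled out on fresh instances. Below is a hedged continuation of the snippet above, following the pattern in the RL4CO quickstart notebook; the `phase` and `decode_type` arguments are taken from there, so verify them against your installed version:

```python
import torch

# Continues from the training snippet above (reuses `env` and `model`).
device = "cuda" if torch.cuda.is_available() else "cpu"
td_init = env.reset(batch_size=[4]).to(device)
trained_policy = model.policy.to(device)

# Greedy decoding on unseen instances; for TSP the reward is the
# negative tour length, so values closer to zero are better.
out = trained_policy(td_init.clone(), env, phase="test", decode_type="greedy")
print(out["reward"])
```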
Other examples can be found in our documentation!
Testing
Run tests with pytest from the root directory:
```bash
pytest tests
```
Known Bugs
You may check out the issues and discussions. We will also periodically post updates in the FAQ section.
Contributing
Have a suggestion, request, or found a bug? Feel free to open an issue or submit a pull request. If you would like to contribute, please check out our contribution guidelines here. We welcome and look forward to all contributions to RL4CO!
We are also on Slack if you have any questions or would like to discuss RL4CO with us. We are open to collaborations and would love to hear from you 🚀
Contributors
Citation
If you find RL4CO valuable for your research or applied projects:
```bibtex
@inproceedings{berto2025rl4co,
    title={{RL4CO: an Extensive Reinforcement Learning for Combinatorial Optimization Benchmark}},
    author={Federico Berto and Chuanbo Hua and Junyoung Park and Laurin Luttmann and Yining Ma and Fanchen Bu and Jiarui Wang and Haoran Ye and Minsu Kim and Sanghyeok Choi and Nayeli Gast Zepeda and Andr\'e Hottung and Jianan Zhou and Jieyi Bi and Yu Hu and Fei Liu and Hyeonah Kim and Jiwoo Son and Haeyeon Kim and Davide Angioni and Wouter Kool and Zhiguang Cao and Jie Zhang and Kijung Shin and Cathy Wu and Sungsoo Ahn and Guojie Song and Changhyun Kwon and Lin Xie and Jinkyoo Park},
    booktitle={Proceedings of the 31st ACM SIGKDD Conference on Knowledge Discovery and Data Mining},
    year={2025},
    url={https://github.com/ai4co/rl4co}
}
```
Note that a previous version of RL4CO was also accepted as an oral presentation at the NeurIPS 2023 GLFrontiers Workshop. Since then, the library has greatly evolved and improved!
Join us
We invite you to join our AI4CO community, an open research group in Artificial Intelligence (AI) for Combinatorial Optimization (CO)!
Owner
- Name: ai4co
- Login: ai4co
- Kind: organization
- Repositories: 1
- Profile: https://github.com/ai4co
Citation (CITATION.cff)
```yaml
cff-version: 1.2.0
message: If you use this software, please cite it as below.
title: RL4CO
authors:
  - family-names: AI4CO
url: https://github.com/ai4co/rl4co
preferred-citation:
  type: conference-paper
  title: "RL4CO: an Extensive Reinforcement Learning for Combinatorial Optimization Benchmark"
  authors:
    - family-names: Berto
      given-names: Federico
    - family-names: Hua
      given-names: Chuanbo
    - family-names: Park
      given-names: Junyoung
    - family-names: Luttmann
      given-names: Laurin
    - family-names: Ma
      given-names: Yining
    - family-names: Bu
      given-names: Fanchen
    - family-names: Wang
      given-names: Jiarui
    - family-names: Ye
      given-names: Haoran
    - family-names: Kim
      given-names: Minsu
    - family-names: Choi
      given-names: Sanghyeok
    - family-names: Gast Zepeda
      given-names: Nayeli
    - family-names: Hottung
      given-names: André
    - family-names: Zhou
      given-names: Jianan
    - family-names: Bi
      given-names: Jieyi
    - family-names: Hu
      given-names: Yu
    - family-names: Liu
      given-names: Fei
    - family-names: Kim
      given-names: Hyeonah
    - family-names: Son
      given-names: Jiwoo
    - family-names: Kim
      given-names: Haeyeon
    - family-names: Angioni
      given-names: Davide
    - family-names: Kool
      given-names: Wouter
    - family-names: Cao
      given-names: Zhiguang
    - family-names: Zhang
      given-names: Jie
    - family-names: Shin
      given-names: Kijung
    - family-names: Wu
      given-names: Cathy
    - family-names: Ahn
      given-names: Sungsoo
    - family-names: Song
      given-names: Guojie
    - family-names: Kwon
      given-names: Changhyun
    - family-names: Xie
      given-names: Lin
    - family-names: Park
      given-names: Jinkyoo
  collection-title: KDD
  year: 2025
  url: https://github.com/ai4co/rl4co
```
GitHub Events
Total
- Create event: 7
- Release event: 2
- Issues event: 34
- Watch event: 203
- Delete event: 1
- Issue comment event: 70
- Push event: 56
- Pull request review event: 18
- Pull request review comment event: 13
- Pull request event: 30
- Fork event: 42
Last Year
- Create event: 7
- Release event: 2
- Issues event: 34
- Watch event: 203
- Delete event: 1
- Issue comment event: 70
- Push event: 56
- Pull request review event: 18
- Pull request review comment event: 13
- Pull request event: 30
- Fork event: 42
Committers
Last synced: about 2 years ago
Top Committers
| Name | Email | Commits |
|---|---|---|
| Federico Berto | b****2@g****m | 651 |
| Chuanbo Hua | c****a@k****r | 87 |
| junyoungpark | j****k@k****r | 37 |
| fedebotu | f****u | 6 |
| ngastzepeda | n****a@o****m | 5 |
| Zymrael | m****i@h****t | 3 |
| Junyoung Park | j****5@g****m | 2 |
| Junyoung Park | j****l@g****m | 2 |
| henry-yeh | h****e@o****m | 2 |
| hyeok9855 | h****5@g****m | 2 |
| Hyeonah Kim | h****m@k****r | 2 |
| BU Fanchen 卜凡辰 | b****7@k****r | 1 |
| Minsu | u****l | 1 |
| hyeonah_kim | g****5@g****m | 1 |
| Hyeonah Kim | 3****m | 1 |
| Ikko Eltociear Ashimine | e****r@g****m | 1 |
Packages
- Total packages: 2
- Total downloads: 914 last-month (pypi)
- Total docker downloads: 597
- Total dependent packages: 0 (may contain duplicates)
- Total dependent repositories: 2 (may contain duplicates)
- Total versions: 45
- Total maintainers: 3
proxy.golang.org: github.com/ai4co/rl4co
- Documentation: https://pkg.go.dev/github.com/ai4co/rl4co#section-documentation
- License: mit
- Latest release: v0.6.0 (published 9 months ago)
pypi.org: rl4co
RL4CO: an Extensive Reinforcement Learning for Combinatorial Optimization Benchmark
- Homepage: https://rl4.co
- Documentation: https://rl4co.readthedocs.io
- License: MIT License
- Latest release: 0.6.0 (published 9 months ago)
Maintainers (3)
Dependencies
- actions/checkout v3 composite
- actions/download-artifact v3 composite
- actions/setup-python v4 composite
- actions/upload-artifact v3 composite
- pypa/gh-action-pypi-publish release/v1 composite
- actions/cache v3 composite
- actions/checkout v3 composite
- actions/setup-python v4 composite
- codecov/codecov-action v3 composite
- docutils >=0.16,<0.20
- jinja2 >=3.0.0,<3.2.0
- myst-parser ==2.0.0
- nbsphinx >=0.8.5,<=0.8.9
- pandoc >=1.0,<=2.3
- rl4co *
- sphinx >6.0,<7.0
- sphinx-autobuild *
- sphinx-autodoc-typehints >=1.16
- sphinx-copybutton >=0.3,<=0.5.2
- sphinx-multiproject *
- sphinx-paramlinks >=0.5.1,<=0.5.4
- sphinx-rtd-dark-mode *
- sphinx-togglebutton >=0.2,<=0.3.2
- sphinx-toolbox ==3.4.0
- sphinxcontrib-fulltoc >=1.0,<=1.2.0
- sphinxcontrib-mockautodoc *
- sphinxcontrib-video ==0.2.0
- einops *
- hydra-colorlog *
- hydra-core *
- lightning >=2.0.5
- matplotlib *
- omegaconf *
- pyrootutils *
- rich *
- scipy *
- tensordict >=0.1.1
- torch >=2.0.0
- torchrl >=0.1.1
- wandb *