routerl
RouteRL is a multi-agent reinforcement learning framework for modeling and simulating the collective route choices of humans and autonomous vehicles.
Science Score: 36.0%
This score indicates how likely this project is to be science-related based on various indicators:
- ○ CITATION.cff file
- ✓ codemeta.json file (found codemeta.json file)
- ✓ .zenodo.json file (found .zenodo.json file)
- ○ DOI references
- ✓ Academic publication links (links to: arxiv.org)
- ○ Academic email domains
- ○ Institutional organization owner
- ○ JOSS paper metadata
- ○ Scientific vocabulary similarity (low similarity, 12.7%, to scientific vocabulary)
Keywords
Repository
Basic Info
- Host: GitHub
- Owner: COeXISTENCE-PROJECT
- License: MIT
- Language: Jupyter Notebook
- Default Branch: main
- Homepage: https://coexistence-project.github.io/RouteRL/
- Size: 743 MB
Statistics
- Stars: 26
- Watchers: 4
- Forks: 9
- Open Issues: 1
- Releases: 3
Topics
Metadata Files
README.md

RouteRL
Multi-Agent Reinforcement Learning framework for modeling and simulating the collective route choices of humans and autonomous vehicles.
RouteRL is a novel framework that integrates Multi-Agent Reinforcement Learning (MARL) with a microscopic traffic simulator, SUMO, facilitating the testing and development of efficient route choice strategies. The proposed framework simulates the daily route choices of driver agents in a city, of two types:
- human drivers, emulated using discrete choice models (a minimal logit sketch follows below),
- AVs, modeled as MARL agents optimizing their policies for a predefined objective.
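For intuition on the human side, a multinomial logit model is the textbook discrete choice formulation: it maps expected route travel times to choice probabilities. The sketch below is illustrative only; RouteRL's actual behavioural models, utility specifications, and parameters may differ.

```python
# Minimal, illustrative multinomial logit route choice (not RouteRL's implementation).
import numpy as np

def logit_route_choice(travel_times, beta=-0.1, rng=None):
    """Sample a route index given expected travel times (e.g., in minutes)."""
    rng = rng or np.random.default_rng()
    utilities = beta * np.asarray(travel_times, dtype=float)  # shorter routes -> higher utility
    probs = np.exp(utilities - utilities.max())               # numerically stable softmax
    probs /= probs.sum()
    return rng.choice(len(travel_times), p=probs), probs

choice, probs = logit_route_choice([12.0, 15.0, 20.0])
print(f"Chosen route: {choice}, choice probabilities: {np.round(probs, 3)}")
```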
RouteRL aims to advance research in MARL, traffic assignment problems, social reinforcement learning (RL), and human-AI interaction for transportation applications.
For an overview, see the paper; for more details, check the documentation online.
RouteRL usage and functionalities at a glance
The following is simplified code for a possible standard MARL algorithm implementation via TorchRL.
```python
env = TrafficEnvironment(seed=42, **env_params)  # initialize the traffic environment
env.start()  # start the connection with SUMO

for episode in range(human_learning_episodes):  # human learning
    env.step()

env.mutation()  # some human agents transition to AV agents

# collects experience by running the policy in the environment (TorchRL)
collector = SyncDataCollector(env, policy, ...)

# training of the autonomous vehicles; human agents follow fixed decisions learned in their learning phase
for tensordict_data in collector:
    # update the policies of the learning agents
    for _ in range(num_epochs):
        subdata = replay_buffer.sample()
        loss_vals = loss_module(subdata)
        optimizer.step()
    collector.update_policy_weights_()

policy.eval()  # set the policy into evaluation mode

# testing phase using the already trained policy
num_episodes = 100
for episode in range(num_episodes):
    env.rollout(len(env.machine_agents), policy=policy)

env.plot_results()  # plot the results
env.stop_simulation()  # stop the connection with SUMO
```
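The snippet above leaves `policy`, `replay_buffer`, `loss_module`, and `optimizer` unspecified. The sketch below shows one way such objects could be assembled with TorchRL's multi-agent utilities; the group key `("agents", ...)`, the observation/action sizes, and all hyperparameters are assumptions for illustration and do not reflect RouteRL's actual configuration.

```python
# Illustrative only: key names ("agents", "observation", ...), sizes, and
# hyperparameters are assumptions, not RouteRL's actual defaults.
import torch
from torch import nn
from tensordict.nn import TensorDictModule
from torchrl.data import LazyTensorStorage, ReplayBuffer
from torchrl.data.replay_buffers.samplers import SamplerWithoutReplacement
from torchrl.modules import MultiAgentMLP, ProbabilisticActor
from torchrl.objectives import ClipPPOLoss

n_agents, obs_dim, n_routes = 4, 8, 3  # assumed AV count, observation size, route options

# Decentralised actor producing logits over the available routes
actor_net = MultiAgentMLP(
    n_agent_inputs=obs_dim, n_agent_outputs=n_routes, n_agents=n_agents,
    centralised=False, share_params=True, depth=2, num_cells=64,
    activation_class=nn.Tanh,
)
policy = ProbabilisticActor(
    module=TensorDictModule(actor_net,
                            in_keys=[("agents", "observation")],
                            out_keys=[("agents", "logits")]),
    in_keys=[("agents", "logits")],
    out_keys=[("agents", "action")],
    distribution_class=torch.distributions.Categorical,
    return_log_prob=True,
    log_prob_key=("agents", "sample_log_prob"),
)

# Centralised critic (MAPPO-style) estimating a per-agent state value
critic = TensorDictModule(
    MultiAgentMLP(
        n_agent_inputs=obs_dim, n_agent_outputs=1, n_agents=n_agents,
        centralised=True, share_params=True, depth=2, num_cells=64,
        activation_class=nn.Tanh,
    ),
    in_keys=[("agents", "observation")],
    out_keys=[("agents", "state_value")],
)

loss_module = ClipPPOLoss(actor_network=policy, critic_network=critic)
loss_module.set_keys(
    action=("agents", "action"),
    value=("agents", "state_value"),
    sample_log_prob=("agents", "sample_log_prob"),
)

replay_buffer = ReplayBuffer(
    storage=LazyTensorStorage(1024),
    sampler=SamplerWithoutReplacement(),
    batch_size=128,
)
optimizer = torch.optim.Adam(loss_module.parameters(), lr=3e-4)
```

In a real training loop, the collector's output would also need advantage estimation (e.g., GAE) before being fed to the PPO loss; see the TorchRL multi-agent tutorials and the RouteRL documentation and tutorials for complete, working examples.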
Documentation
Installation
- Prerequisite: Make sure you have SUMO installed on your system. This procedure should be carried out separately, by following the instructions provided here; a quick way to verify the installation is sketched after this list.
- Option 1: Install the latest stable version from PyPI: `pip install routerl`
- Option 2: Clone this repository for the latest version and manually install its dependencies: `git clone https://github.com/COeXISTENCE-PROJECT/RouteRL.git`, then `cd RouteRL`, then `pip install -r requirements.txt`
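As a quick, illustrative check that SUMO is reachable from Python before running RouteRL (assuming a standard SUMO install that sets the `SUMO_HOME` environment variable and puts the `sumo` binary on the PATH):

```python
# Illustrative sanity check; it only verifies that SUMO and the traci bindings
# are visible, which RouteRL relies on for its simulation backend.
import os
import shutil

assert shutil.which("sumo") is not None, "sumo binary not found on PATH"
assert "SUMO_HOME" in os.environ, "SUMO_HOME environment variable is not set"

import traci  # the Python bindings RouteRL uses to talk to SUMO
print("SUMO found at:", shutil.which("sumo"))
```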
Reproducibility capsule
We have an experiment script encapsulated in a CodeOcean capsule. This capsule demonstrates RouteRL's capabilities without the need for SUMO installation or dependency management.
1. Visit the capsule link.
2. Create a free CodeOcean account (if you don't have one).
3. Click "Reproducible Run" to execute the code in a controlled and reproducible environment.
Credits
RouteRL is part of COeXISTENCE (ERC Starting Grant, grant agreement No 101075838) and is the joint work of a team at Jagiellonian University in Kraków, Poland: Ahmet Onur Akman and Anastasia Psarou (main contributors), supported by Grzegorz Jamroz, Zoltán Varga, Łukasz Gorczyca, Michał Hoffman and others, within the research group of Rafał Kucharski.
Owner
- Name: COeXISTENCE-PROJECT
- Login: COeXISTENCE-PROJECT
- Kind: organization
- Repositories: 1
- Profile: https://github.com/COeXISTENCE-PROJECT
GitHub Events
Total
- Create event: 21
- Release event: 2
- Issues event: 32
- Watch event: 28
- Delete event: 20
- Issue comment event: 22
- Push event: 768
- Pull request event: 16
- Fork event: 6
Last Year
- Create event: 21
- Release event: 2
- Issues event: 32
- Watch event: 28
- Delete event: 20
- Issue comment event: 22
- Push event: 768
- Pull request event: 16
- Fork event: 6
Issues and Pull Requests
Last synced: 6 months ago
All Time
- Total issues: 17
- Total pull requests: 9
- Average time to close issues: 27 days
- Average time to close pull requests: 1 day
- Total issue authors: 3
- Total pull request authors: 2
- Average comments per issue: 1.18
- Average comments per pull request: 0.0
- Merged pull requests: 6
- Bot issues: 0
- Bot pull requests: 0
Past Year
- Issues: 16
- Pull requests: 9
- Average time to close issues: 6 days
- Average time to close pull requests: 1 day
- Issue authors: 3
- Pull request authors: 2
- Average comments per issue: 1.25
- Average comments per pull request: 0.0
- Merged pull requests: 6
- Bot issues: 0
- Bot pull requests: 0
Top Authors
Issue Authors
- aonurakman (14)
- RafalKucharskiPK (2)
- AnastasiaPsarou (1)
Pull Request Authors
- aonurakman (8)
- dg7s (2)
Top Labels
Issue Labels
Pull Request Labels
Packages
- Total packages: 2
- Total downloads: 32 last month (PyPI)
- Total dependent packages: 0 (may contain duplicates)
- Total dependent repositories: 0 (may contain duplicates)
- Total versions: 3
- Total maintainers: 1
pypi.org: routerlurb
RouteRL is a multi-agent reinforcement learning framework for urban route choice in different city networks. This subpackage is developed to support compatibility with URB until the full integration is complete.
- Documentation: https://routerlurb.readthedocs.io/
- License: MIT License
- Latest release: 1.0.0 (published 9 months ago)
Rankings
Maintainers (1)
pypi.org: routerl
RouteRL is a multi-agent reinforcement learning framework for urban route choice in different city networks.
- Documentation: https://routerl.readthedocs.io/
- License: MIT License
- Latest release: 1.0.1 (published 10 months ago)
Rankings
Maintainers (1)
Dependencies
- actions/checkout v2.3.4 composite
- actions/setup-python v2 composite
- beautifulsoup4 ==4.12.3
- gymnasium ==0.29.1
- matplotlib ==3.9.2
- networkx ==3.1
- numpy ==2.1.2
- pandas ==2.2.3
- pettingzoo ==1.24.3
- polars ==1.9.0
- prettytable ==3.11.0
- seaborn ==0.13.2
- torch ==2.4.0
- torchrl ==0.5.0
- traci ==1.21.0
- actions/checkout v4 composite
- actions/setup-python v4 composite
- actions/checkout v4 composite
- actions/setup-python v4 composite
- registry.codeocean.com/codeocean/miniconda3 4.12.0-python3.12-ubuntu22.04 build
- gymnasium *
- janux *
- matplotlib *
- numpy *
- pandas *
- pettingzoo *
- polars *
- prettytable *
- seaborn *
- tensordict *
- torch *
- torchrl *
- tqdm *
- traci *
- actions/checkout v4 composite
- actions/setup-python v4 composite