sumo-rl

Reinforcement Learning environments for Traffic Signal Control with SUMO. Compatible with Gymnasium, PettingZoo, and popular RL libraries.

https://github.com/lucasalegre/sumo-rl

Science Score: 67.0%

This score indicates how likely this project is to be science-related based on various indicators:

  • CITATION.cff file
    Found CITATION.cff file
  • codemeta.json file
    Found codemeta.json file
  • .zenodo.json file
    Found .zenodo.json file
  • DOI references
    Found 5 DOI reference(s) in README
  • Academic publication links
    Links to: sciencedirect.com, springer.com, wiley.com, ieee.org, zenodo.org
  • Committers with academic emails
  • Institutional organization owner
  • JOSS paper metadata
  • Scientific vocabulary similarity
    Low similarity (18.0%) to scientific vocabulary

Keywords

deep-reinforcement-learning gym gym-env gymnasium machine-learning pettingzoo python reinforcement-learning rl-algorithms sumo traffic-signal-control

Keywords from Contributors

gym-environment interactive mesh interpretability sequences generic projection optim hacking network-simulation
Last synced: 6 months ago

Repository

Reinforcement Learning environments for Traffic Signal Control with SUMO. Compatible with Gymnasium, PettingZoo, and popular RL libraries.

Basic Info
Statistics
  • Stars: 894
  • Watchers: 11
  • Forks: 229
  • Open Issues: 15
  • Releases: 10
Topics
deep-reinforcement-learning gym gym-env gymnasium machine-learning pettingzoo python reinforcement-learning rl-algorithms sumo traffic-signal-control
Created about 7 years ago · Last pushed 7 months ago
Metadata Files
Readme License Citation

README.md


SUMO-RL

SUMO-RL provides a simple interface to instantiate Reinforcement Learning (RL) environments with SUMO for Traffic Signal Control.

Goals of this repository:
- Provide a simple interface to work with Reinforcement Learning for Traffic Signal Control using SUMO
- Support Multiagent RL
- Compatibility with gymnasium.Env and popular RL libraries such as stable-baselines3 and RLlib
- Easy customisation: state and reward definitions are easily modifiable

The main class is SumoEnvironment. If instantiated with the parameter single_agent=True, it behaves like a regular Gymnasium Env. For multiagent environments, use env or parallel_env to instantiate a PettingZoo environment with the AEC or Parallel API, respectively. TrafficSignal is responsible for retrieving information and actuating on traffic lights via the TraCI API.
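For illustration, a minimal sketch of instantiating these directly (file paths are placeholders and keyword defaults may differ; see the documentation):

```python
# Sketch only: paths are placeholders for your own SUMO network/route files.
import sumo_rl
from sumo_rl import SumoEnvironment

# Single-agent: behaves like a regular gymnasium.Env
env = SumoEnvironment(
    net_file="your_network.net.xml",
    route_file="your_routes.rou.xml",
    single_agent=True,
    use_gui=False,
    num_seconds=1000,
)

# Multi-agent: PettingZoo environments with the AEC or Parallel API
aec_env = sumo_rl.env(net_file="your_network.net.xml", route_file="your_routes.rou.xml")
par_env = sumo_rl.parallel_env(net_file="your_network.net.xml", route_file="your_routes.rou.xml")
```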

For more details, check the documentation online.

Install

Install the latest version of SUMO:

```bash
sudo add-apt-repository ppa:sumo/stable
sudo apt-get update
sudo apt-get install sumo sumo-tools sumo-doc
```

Don't forget to set the SUMO_HOME variable (default SUMO installation path is /usr/share/sumo):

```bash
echo 'export SUMO_HOME="/usr/share/sumo"' >> ~/.bashrc
source ~/.bashrc
```

Important: for a huge performance boost (~8x) with Libsumo, you can declare the variable:

```bash
export LIBSUMO_AS_TRACI=1
```

Notice that you will not be able to run with sumo-gui or with multiple simulations in parallel if this is active (more details).

Install SUMO-RL

Stable release version is available through pip:

```bash
pip install sumo-rl
```

Alternatively, you can install the latest (unreleased) version:

```bash
git clone https://github.com/LucasAlegre/sumo-rl
cd sumo-rl
pip install -e .
```

MDP - Observations, Actions and Rewards

Observation

The default observation for each traffic signal agent is a vector:

```python
obs = [phase_one_hot, min_green, lane_1_density, ..., lane_n_density, lane_1_queue, ..., lane_n_queue]
```

- `phase_one_hot` is a one-hot encoded vector indicating the current active green phase
- `min_green` is a binary variable indicating whether min_green seconds have already passed in the current phase
- `lane_i_density` is the number of vehicles in incoming lane i divided by the total capacity of the lane
- `lane_i_queue` is the number of queued (speed below 0.1 m/s) vehicles in incoming lane i divided by the total capacity of the lane

You can define your own observation by implementing a class that inherits from ObservationFunction and passing it to the environment constructor.
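As a rough sketch of what such a class could look like (the import path, the `observation_class` keyword, and the TrafficSignal helpers used below are assumptions; check the documentation for the exact interface):

```python
# Hypothetical sketch of a custom observation that only reports per-lane queues.
# Import path, helper names, and the observation_class keyword are assumptions.
import numpy as np
from gymnasium import spaces
from sumo_rl.environment.observations import ObservationFunction


class QueueObservation(ObservationFunction):
    def __call__(self):
        # self.ts is the TrafficSignal being observed
        queue = self.ts.get_lanes_queue()  # assumed helper: normalized queue per incoming lane
        return np.array(queue, dtype=np.float32)

    def observation_space(self):
        return spaces.Box(low=0.0, high=1.0, shape=(len(self.ts.lanes),), dtype=np.float32)


env = SumoEnvironment(..., observation_class=QueueObservation)
```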

Action

The action space is discrete. Every 'delta_time' seconds, each traffic signal agent can choose the next green phase configuration.

E.g.: in the 2-way single intersection there are |A| = 4 discrete actions, each corresponding to a different green phase configuration.

Important: every time a phase change occurs, the next phase is preceded by a yellow phase lasting yellow_time seconds.

Rewards

The default reward function is the change in cumulative vehicle delay:
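In symbols (a sketch with notation chosen here, where D_t is the sum of the waiting times of all approaching vehicles at step t):

```latex
r_t = D_{t-1} - D_t
```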

That is, the reward is how much the total delay (sum of the waiting times of all approaching vehicles) changed in relation to the previous time-step.

You can choose a different reward function (see the ones implemented in TrafficSignal) with the parameter reward_fn in the SumoEnvironment constructor.
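For example, a sketch of selecting a built-in reward by name rather than passing a callable (the specific string key is an assumption; see TrafficSignal for the names actually registered):

```python
# Sketch: the reward name string is an assumption; check TrafficSignal for valid keys.
env = SumoEnvironment(..., reward_fn="queue")
```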

It is also possible to implement your own reward function:

```python
def my_reward_fn(traffic_signal):
    return traffic_signal.get_average_speed()

env = SumoEnvironment(..., reward_fn=my_reward_fn)
```

APIs (Gymnasium and PettingZoo)

Gymnasium Single-Agent API

If your network only has ONE traffic light, then you can instantiate a standard Gymnasium env (see Gymnasium API):

```python
import gymnasium as gym
import sumo_rl

env = gym.make('sumo-rl-v0',
               net_file='path_to_your_network.net.xml',
               route_file='path_to_your_routefile.rou.xml',
               out_csv_name='path_to_output.csv',
               use_gui=True,
               num_seconds=100000)
obs, info = env.reset()
done = False
while not done:
    next_obs, reward, terminated, truncated, info = env.step(env.action_space.sample())
    done = terminated or truncated
```

PettingZoo Multi-Agent API

For multi-agent environments, you can use the PettingZoo API (see Petting Zoo API):

```python
import sumo_rl

env = sumo_rl.parallel_env(net_file='nets/RESCO/grid4x4/grid4x4.net.xml',
                           route_file='nets/RESCO/grid4x4/grid4x4_1.rou.xml',
                           use_gui=True,
                           num_seconds=3600)
observations = env.reset()
while env.agents:
    # this is where you would insert your policy
    actions = {agent: env.action_space(agent).sample() for agent in env.agents}
    observations, rewards, terminations, truncations, infos = env.step(actions)
```
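For the turn-based AEC API mentioned earlier, a sketch along the same lines (sumo_rl.env is assumed to accept the same keyword arguments as parallel_env; the loop is the standard PettingZoo AEC pattern):

```python
import sumo_rl

env = sumo_rl.env(net_file='nets/RESCO/grid4x4/grid4x4.net.xml',
                  route_file='nets/RESCO/grid4x4/grid4x4_1.rou.xml',
                  use_gui=False,
                  num_seconds=3600)
env.reset()
for agent in env.agent_iter():
    observation, reward, termination, truncation, info = env.last()
    # replace the random sample below with your policy
    action = None if (termination or truncation) else env.action_space(agent).sample()
    env.step(action)
env.close()
```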

RESCO Benchmarks

In the folder nets/RESCO you can find the network and route files from RESCO (Reinforcement Learning Benchmarks for Traffic Signal Control), which was built on top of SUMO-RL. See their paper for results.

Experiments

Check experiments for examples of how to instantiate an environment and train your RL agent.

Q-learning in a one-way single intersection:

```bash
python experiments/ql_single-intersection.py
```

RLlib PPO multiagent in a 4x4 grid:

```bash
python experiments/ppo_4x4grid.py
```

stable-baselines3 DQN in a 2-way single intersection:

Note: you need to install stable-baselines3 with `pip install "stable_baselines3[extra]>=2.0.0a9"` for Gymnasium compatibility.

```bash
python experiments/dqn_2way-single-intersection.py
```
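For reference, a minimal training sketch with stable-baselines3 on the single-agent Gymnasium env (file paths and hyperparameters are placeholders, not the values used in the experiment script):

```python
# Minimal DQN sketch with stable-baselines3; paths and hyperparameters are placeholders.
import gymnasium as gym
import sumo_rl
from stable_baselines3 import DQN

env = gym.make('sumo-rl-v0',
               net_file='path_to_your_network.net.xml',
               route_file='path_to_your_routefile.rou.xml',
               out_csv_name='outputs/dqn_single_intersection.csv',
               use_gui=False,
               num_seconds=100000)

model = DQN('MlpPolicy', env, learning_rate=1e-3, verbose=1)
model.learn(total_timesteps=100000)
```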

Plotting results:

```bash
python outputs/plot.py -f outputs/4x4grid/ppo_conn0_ep2
```

Citing

If you use this repository in your research, please cite:

```bibtex
@misc{sumorl,
    author = {Lucas N. Alegre},
    title = {{SUMO-RL}},
    year = {2019},
    publisher = {GitHub},
    journal = {GitHub repository},
    howpublished = {\url{https://github.com/LucasAlegre/sumo-rl}},
}
```

List of publications that use SUMO-RL (please open a pull request to add missing entries):

- Quantifying the impact of non-stationarity in reinforcement learning-based traffic signal control (Alegre et al., 2021)
- Information-Theoretic State Space Model for Multi-View Reinforcement Learning (Hwang et al., 2023)
- A citywide TD-learning based intelligent traffic signal control for autonomous vehicles: Performance evaluation using SUMO (Reza et al., 2023)
- Handling uncertainty in self-adaptive systems: an ontology-based reinforcement learning model (Ghanadbashi et al., 2023)
- Multiagent Reinforcement Learning for Traffic Signal Control: a k-Nearest Neighbors Based Approach (Almeida et al., 2022)
- From Local to Global: A Curriculum Learning Approach for Reinforcement Learning-based Traffic Signal Control (Zheng et al., 2022)
- Poster: Reliable On-Ramp Merging via Multimodal Reinforcement Learning (Bagwe et al., 2022)
- Using ontology to guide reinforcement learning agents in unseen situations (Ghanadbashi & Golpayegani, 2022)
- Information upwards, recommendation downwards: reinforcement learning with hierarchy for traffic signal control (Antes et al., 2022)
- A Comparative Study of Algorithms for Intelligent Traffic Signal Control (Chaudhuri et al., 2022)
- An Ontology-Based Intelligent Traffic Signal Control Model (Ghanadbashi & Golpayegani, 2021)
- Reinforcement Learning Benchmarks for Traffic Signal Control (Ault & Sharon, 2021)
- EcoLight: Reward Shaping in Deep Reinforcement Learning for Ergonomic Traffic Signal Control (Agand et al., 2021)

Owner

  • Name: Lucas Alegre
  • Login: LucasAlegre
  • Kind: user
  • Location: Porto Alegre
  • Company: Institute of Informatics - UFRGS

PhD student at Institute of Informatics - UFRGS. Interested in reinforcement learning, machine learning and artificial (neuro-inspired) intelligence.

Citation (CITATION.bib)

@misc{AlegreSUMORL,
    author = {Lucas N. Alegre},
    title = {{SUMO-RL}},
    year = {2019},
    publisher = {GitHub},
    journal = {GitHub repository},
    howpublished = {\url{https://github.com/LucasAlegre/sumo-rl}},
}

GitHub Events

Total
  • Issues event: 14
  • Watch event: 162
  • Issue comment event: 19
  • Push event: 16
  • Pull request event: 10
  • Fork event: 43
  • Create event: 1
Last Year
  • Issues event: 14
  • Watch event: 162
  • Issue comment event: 19
  • Push event: 16
  • Pull request event: 10
  • Fork event: 43
  • Create event: 1

Committers

Last synced: 11 months ago

All Time
  • Total Commits: 215
  • Total Committers: 9
  • Avg Commits per committer: 23.889
  • Development Distribution Score (DDS): 0.084
Past Year
  • Commits: 20
  • Committers: 3
  • Avg Commits per committer: 6.667
  • Development Distribution Score (DDS): 0.1
Top Committers
Name Email Commits
Lucas Alegre l****e@g****m 197
J K Terry j****y@g****m 5
Filip Kalus k****5@g****m 4
Max Schumacher m****2@g****m 3
Ariel Kwiatkowski a****i@g****m 2
dependabot[bot] 4****] 1
Michal Gregor m****l@g****k 1
Marco Magni 1****5 1
Kevyn Kelso 5****o 1
Committer Domains (Top 20 + Academic)

Issues and Pull Requests

Last synced: 6 months ago

All Time
  • Total issues: 150
  • Total pull requests: 39
  • Average time to close issues: 3 months
  • Average time to close pull requests: 12 days
  • Total issue authors: 89
  • Total pull request authors: 20
  • Average comments per issue: 2.83
  • Average comments per pull request: 2.13
  • Merged pull requests: 22
  • Bot issues: 0
  • Bot pull requests: 1
Past Year
  • Issues: 9
  • Pull requests: 13
  • Average time to close issues: 17 days
  • Average time to close pull requests: 1 day
  • Issue authors: 6
  • Pull request authors: 6
  • Average comments per issue: 1.33
  • Average comments per pull request: 0.46
  • Merged pull requests: 6
  • Bot issues: 0
  • Bot pull requests: 1
Top Authors
Issue Authors
  • jkterry1 (13)
  • locker2153 (5)
  • FanFan2021 (4)
  • luckywlj (4)
  • prajwalvinod (4)
  • Sitting-Down (4)
  • kurkurzz (4)
  • TrinhTuanHung2021 (4)
  • gioannides (3)
  • Gavin-Tao (3)
  • DryCell-x (3)
  • smarianimore (3)
  • deltag0 (3)
  • Aegis1863 (3)
  • jackli100 (2)
Pull Request Authors
  • jkterry1 (5)
  • LucasAlegre (5)
  • Kunal-Kumar-Sahoo (3)
  • firemankoxd (3)
  • ankitdipto (3)
  • Daraan (3)
  • sitexa (2)
  • RedTachyon (2)
  • dependabot[bot] (2)
  • MoonGlow22 (2)
  • frefolli (2)
  • Loveen01 (2)
  • magni5 (2)
  • vibhamasti (1)
  • x-yang1021 (1)
Top Labels
Issue Labels
Pull Request Labels
enhancement (6) dependencies (2) bug (1)