multi_agent_reinforcement_learning
https://github.com/jungar111/multi_agent_reinforcement_learning
Science Score: 44.0%
This score indicates how likely this project is to be science-related based on various indicators:
- ✓ CITATION.cff file: found
- ✓ codemeta.json file: found
- ✓ .zenodo.json file: found
- ○ DOI references
- ○ Academic publication links
- ○ Academic email domains
- ○ Institutional organization owner
- ○ JOSS paper metadata
- ○ Scientific vocabulary similarity: low similarity (13.7%) to scientific vocabulary
Repository
Basic Info
- Host: GitHub
- Owner: Jungar111
- Language: Jupyter Notebook
- Default Branch: main
- Size: 8.37 MB
Statistics
- Stars: 1
- Watchers: 1
- Forks: 0
- Open Issues: 0
- Releases: 0
Metadata Files
README.md
Multi-Agent Reinforcement Learning
- Asger Sturis Tang, s184305
- Frederik Møller Sørensen, s184306
- Joachim Pors Andreassen, s184289
Introduction
This repository contains the code for our master's thesis at the Technical University of Denmark (DTU). The project focuses on multi-agent reinforcement learning, specifically developing pricing and rebalancing strategies for urban mobility in different cities using the Soft Actor-Critic (SAC) and Advantage Actor-Critic (A2C) algorithms.
Setup
Requirements
- Python 3.10.xx
- Poetry for dependency management
- CPLEX 22.1.1 (requires a license or a student account)
Initialise project
Before following the steps below, make sure you have installed all of the requirements above.
- Clone the repository:
  ```bash
  git clone https://github.com/Jungar111/multi_agent_reinforcement_learning
  ```
- Navigate to the cloned directory:
  ```bash
  cd multi_agent_reinforcement_learning
  ```
- Get the latest .lock file using Poetry:
  ```bash
  poetry lock
  ```
- Install dependencies using Poetry:
  ```bash
  poetry install
  ```
Usage
The project includes two main scripts:
- main.py: runs the A2C (Advantage Actor-Critic) algorithm.
- main_SAC.py: runs the Soft Actor-Critic (SAC) algorithm.
To run the A2C algorithm for San Francisco, use:
```bash
python main.py
```
Or for the SAC algorithm:
```bash
python main_SAC.py
```
Customization
You can customize the city for simulation by modifying the data source in the script. The project defaults to San Francisco but supports other cities included in the data folder.
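As a hedged sketch of what that change looks like (the actual variable names and data-file naming scheme in main.py may differ; these are assumptions, not taken from the script):

```python
# Hypothetical sketch: the variable name and the "scenario_{city}" file
# naming convention are assumptions for illustration only.
city = "new_york"  # default is "san_francisco"; use any city in the data folder
data_path = f"data/scenario_{city}.json"  # assumed naming convention
print(data_path)
```

The idea is simply that the script reads city-specific data, so pointing it at a different file in the data folder switches the simulated city.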
Project Structure
- data: Contains city-specific data for simulations.
- images: Stores images and visual assets.
- multi_agent_reinforcement_learning: Main module containing:
  - algos: Algorithm implementations.
  - build: Compiled files.
  - cplex_mod: CPLEX model files.
  - data_models: Data models for the project.
  - envs: Environment configurations for the RL agents.
  - evaluation: Evaluation scripts and utilities.
  - misc: Miscellaneous scripts and files.
  - plots: Code for generating plots.
  - saved_files: Saved checkpoints and logs.
  - utils: Utility scripts and helpers.
- notebooks: Jupyter notebooks for exploratory data analysis and visualizations.
- saved_files: Contains RL logs, CPLEX logs, and checkpoints.
- Note that saved_files requires the following sub-folder structure, which is not inherited from the repo:
  - ckpt/scenario_{city}
  - cplex_logs/matching/scenario_{city}
  - cplex_logs/rebalancing/scenario_{city}
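Since these sub-folders are not tracked in the repository, they can be created up front; a minimal sketch, assuming the city is san_francisco (substitute the city you are simulating):

```shell
# Create the expected saved_files layout for one city (san_francisco assumed).
mkdir -p saved_files/ckpt/scenario_san_francisco
mkdir -p saved_files/cplex_logs/matching/scenario_san_francisco
mkdir -p saved_files/cplex_logs/rebalancing/scenario_san_francisco
```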
Acknowledgments
This work was conducted as part of a master's thesis at DTU. We would like to thank Francisco Camara Pereira, Filipe Rodrigues, Carolin Samanta Schmidt, and DTU for their support, guidance, and endless fruitful discussions.
Owner
- Login: Jungar111
- Kind: user
- Repositories: 1
- Profile: https://github.com/Jungar111
Citation (CITATION.cff)
cff-version: 0.0.1
message: "If you use this code, or parts of it, please cite it using the below."
authors:
  - family-names: Andreasen
    given-names: Joachim Pors
  - family-names: Sørensen
    given-names: Frederik Møller
  - family-names: Tang
    given-names: Asger Sturis
title: "Multi Agent Reinforcement Learning Methods"
version: 1.0.0
url: "https://github.com/Jungar111/multi_agent_reinforcement_learning"
date-released: 2023-08-31