pcrl
Reinforcement Learning-based Placement of Charging Stations in Urban Road Networks
Science Score: 44.0%
This score indicates how likely this project is to be science-related based on various indicators:
- ✓ CITATION.cff file: Found CITATION.cff file
- ✓ codemeta.json file: Found codemeta.json file
- ✓ .zenodo.json file: Found .zenodo.json file
- ○ DOI references
- ○ Academic publication links
- ○ Academic email domains
- ○ Institutional organization owner
- ○ JOSS paper metadata
- ○ Scientific vocabulary similarity: Low similarity (10.3%) to scientific vocabulary
Repository
Reinforcement Learning-based Placement of Charging Stations in Urban Road Networks
Basic Info
Statistics
- Stars: 11
- Watchers: 3
- Forks: 4
- Open Issues: 10
- Releases: 0
Metadata Files
README.md
PCRL
Reinforcement Learning-based Placement of Charging Stations in Urban Road Networks
Description
This repository is the implementation of the project "Reinforcement Learning-based Placement of Charging Stations in Urban Road Networks" by Leonie von Wahl, Nicolas Tempelmeier, Ashutosh Sao and Elena Demidova. In this project, we train a model with Deep Q Network Reinforcement Learning to place charging stations in a road network.
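As background (a simplification, not the project's code): DQN approximates the classic tabular Q-learning update with a neural network. In tabular form, the Bellman backup that the network is trained toward looks like this:

```python
# Tabular Q-learning update: a simplified stand-in for the DQN target
# that Stable Baselines3 trains a network to approximate.
def q_update(q, state, action, reward, next_state, alpha=0.1, gamma=0.99):
    """One Bellman backup: Q(s,a) += alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))."""
    best_next = max(q[next_state])          # greedy value of the next state
    td_error = reward + gamma * best_next - q[state][action]
    q[state][action] += alpha * td_error
    return q
```

In the project itself, states encode the road network with already-placed stations, and actions select candidate nodes for new stations.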
Installation
We use Stable Baselines3 as the reinforcement learning framework, Gym to create the RL environment, and OSMnx to work with road networks. We use Python 3.8.10.

To install the requirements:

```bash
git clone git@github.com:frantz03/PCRL.git
cd PCRL
pip install -r final_requirements.txt
```
Toy Example Dataset
Before training the models, some data preparation is needed. This can be done with
load_graph.py and nodes_preparation.py. However, we do not upload our own data here.
Instead, we offer a preprocessed toy example built from fake data, with which the training
and evaluation can be tested.
Training & Evaluation
To train a model on an example road network, run reinforcement.py. The custom environment
is described in env_plus.py.
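The real environment definition lives in env_plus.py; as a rough, hypothetical sketch of what a Gym-style placement environment can look like (the class name, the per-node demand model, and the reward here are invented for illustration):

```python
# Hypothetical sketch of a Gym-style placement environment.
# The actual interface is defined in env_plus.py and may differ.
import random

class ToyPlacementEnv:
    """Place charging stations on road-network nodes, one per step."""

    def __init__(self, n_nodes=20, budget=3, seed=0):
        self.n_nodes = n_nodes
        self.budget = budget          # how many stations may be placed
        rng = random.Random(seed)
        # invented per-node charging demand in [0, 1)
        self.demand = [rng.random() for _ in range(n_nodes)]
        self.reset()

    def reset(self):
        self.placed = set()
        return self._obs()

    def _obs(self):
        # binary vector: 1 where a station is already placed
        return [1 if i in self.placed else 0 for i in range(self.n_nodes)]

    def step(self, action):
        # reward: demand covered by the new station (0 if node already used)
        reward = 0.0 if action in self.placed else self.demand[action]
        self.placed.add(action)
        done = len(self.placed) >= self.budget
        return self._obs(), reward, done, {}
```

An agent interacts with it in the usual Gym loop: `obs = env.reset()`, then repeated `obs, reward, done, info = env.step(action)` until `done`.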
To generate a charging plan from the model, run model_eval.py.
Finally, to evaluate the charging plan with the metrics from the utility model (in
evaluation_framework.py), run test_solution.py.
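The actual utility-model metrics are defined in evaluation_framework.py; as an illustration of the kind of spatial-coverage score such an evaluation might compute (the adjacency-list graph encoding, the `coverage` name, and the hop threshold are all assumptions for this sketch):

```python
# Illustrative only: score a charging plan by the fraction of road-network
# nodes reachable within `max_hops` edges of some station.
from collections import deque

def coverage(graph, stations, max_hops=2):
    """graph: dict node -> list of neighbour nodes; stations: iterable of nodes."""
    covered = set()
    for s in stations:
        # breadth-first search out to max_hops from each station
        frontier = deque([(s, 0)])
        seen = {s}
        while frontier:
            node, d = frontier.popleft()
            covered.add(node)
            if d < max_hops:
                for nb in graph[node]:
                    if nb not in seen:
                        seen.add(nb)
                        frontier.append((nb, d + 1))
    return len(covered) / len(graph)
```

For example, on a 5-node line graph 0-1-2-3-4, a single station at the middle node 2 covers every node within 2 hops, giving a score of 1.0.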
Visualisation
To visualise the results, run visualise.py.
Folder structure
For the data: Graph/<location>/
For the images: Images/<location>/
For the results: Results/<location>/
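A small helper, assuming exactly this layout, can create the per-location folders (the `make_dirs` name is ours, not part of the repository):

```python
# Create the Graph/<location>/, Images/<location>/ and Results/<location>/
# folders expected by the scripts. Sketch only; the repository has no such helper.
from pathlib import Path

def make_dirs(location, root="."):
    paths = [Path(root) / sub / location for sub in ("Graph", "Images", "Results")]
    for p in paths:
        p.mkdir(parents=True, exist_ok=True)
    return paths
```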
Owner
- Name: Ashutosh Sao
- Login: ashusao
- Kind: user
- Location: Hannover, Germany
- Repositories: 1
- Profile: https://github.com/ashusao
A Machine Learning enthusiast, pursuing a PhD in the same at L3S Research Center, Hannover.
Citation (CITATION.cff)
```yaml
# This CITATION.cff file was generated with cffinit.
# Visit https://bit.ly/cffinit to generate yours today!
cff-version: 1.2.0
title: >-
  Reinforcement Learning-based Placement of Charging
  Stations in Urban Road Networks
message: >-
  If you use this software, please cite it using the
  metadata from this file.
type: software
authors:
  - given-names: Leonie
    family-names: von Wahl
    orcid: 'https://orcid.org/0000-0003-0013-831X'
  - given-names: Nicolas
    family-names: Tempelmeier
  - given-names: Ashutosh
    family-names: Sao
  - given-names: Elena
    family-names: Demidova
repository-code: 'https://github.com/ashusao/PCRL'
license: MIT
date-released: '2022-11-04'
```
GitHub Events
Total
- Issues event: 1
- Watch event: 8
Last Year
- Issues event: 1
- Watch event: 8
Dependencies
- Fiona ==1.8.21
- Pillow ==9.1.1
- Rtree ==1.0.0
- Shapely ==1.8.2
- attrs ==21.4.0
- certifi ==2022.6.15
- charset-normalizer ==2.1.0
- click ==8.1.3
- click-plugins ==1.1.1
- cligj ==0.7.2
- cloudpickle ==2.1.0
- cycler ==0.11.0
- fonttools ==4.33.3
- geopandas ==0.11.0
- gym ==0.21.0
- gym-notices ==0.0.7
- idna ==3.3
- importlib-metadata ==4.12.0
- kiwisolver ==1.4.3
- matplotlib ==3.5.2
- munch ==2.5.0
- networkx ==2.8.4
- numpy ==1.23.0
- osmnx ==1.2.1
- packaging ==21.3
- pandas ==1.4.3
- pyparsing ==3.0.9
- pyproj ==3.3.1
- python-dateutil ==2.8.2
- pytz ==2022.1
- requests ==2.28.1
- six ==1.16.0
- stable-baselines3 ==1.5.0
- torch ==1.12.0
- typing_extensions ==4.2.0
- urllib3 ==1.26.9
- zipp ==3.8.0