Science Score: 62.0%
This score indicates how likely this project is to be science-related, based on the following indicators:
- ✓ CITATION.cff file: found CITATION.cff file
- ✓ codemeta.json file: found codemeta.json file
- ✓ .zenodo.json file: found .zenodo.json file
- ○ DOI references
- ✓ Academic publication links: links to arxiv.org
- ○ Academic email domains
- ✓ Institutional organization owner: organization knowledgedefinednetworking has institutional domain (www.ac.upc.edu)
- ○ JOSS paper metadata
- ○ Scientific vocabulary similarity: low similarity (10.0%) to scientific vocabulary
Repository
Basic Info
- Host: GitHub
- Owner: knowledgedefinednetworking
- License: bsd-3-clause
- Language: Python
- Default Branch: master
- Size: 438 KB
Statistics
- Stars: 225
- Watchers: 7
- Forks: 39
- Open Issues: 1
- Releases: 0
Metadata Files
README.md
Deep Reinforcement Learning meets Graph Neural Networks: exploring a routing optimization use case
Link to paper: [here]
P. Almasan, J. Suárez-Varela, A. Badia-Sampera, K. Rusek, P. Barlet-Ros, A. Cabellos-Aparicio.
Contact: felician.paul.almasan@upc.edu
Abstract
Recent advances in Deep Reinforcement Learning (DRL) have shown significant improvements in decision-making problems. The networking community has started to investigate how DRL can provide a new breed of solutions to relevant optimization problems, such as routing. However, most state-of-the-art DRL-based networking techniques fail to generalize: they can only operate over network topologies seen during training, not over new topologies. The reason behind this important limitation is that existing DRL networking solutions use standard neural networks (e.g., fully connected), which are unable to learn graph-structured information. In this paper, we propose to use Graph Neural Networks (GNN) in combination with DRL. GNNs have recently been proposed to model graphs, and our novel DRL+GNN architecture is able to learn, operate, and generalize over arbitrary network topologies. To showcase its generalization capabilities, we evaluate it on an Optical Transport Network (OTN) scenario, where the agent needs to allocate traffic demands efficiently. Our results show that our DRL+GNN agent achieves outstanding performance in topologies unseen during training.
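The generalization property described above comes from the fact that a GNN's message-passing computation is defined per node and per edge, so the same parameters apply to any topology. The sketch below illustrates this idea in pure Python; the function names, the mean-aggregation rule, and the sum readout are illustrative assumptions, not the repository's actual architecture or API.

```python
# Minimal message-passing sketch over an arbitrary topology (pure Python).
# All names and update rules here are illustrative, not the repo's actual code.

def message_passing(adj, features, steps=2):
    """One scalar hidden state per node; at each step a node mixes its own
    state with the mean of its neighbors' states."""
    h = dict(features)
    for _ in range(steps):
        h = {
            node: 0.5 * h[node]
                  + 0.5 * (sum(h[n] for n in neigh) / len(neigh) if neigh else 0.0)
            for node, neigh in adj.items()
        }
    return h

def readout(h):
    # Global readout: sum of node states (stands in for a learned Q-value).
    return sum(h.values())

# A 4-node ring topology; the same code runs unchanged on any other graph,
# which is the property that lets a GNN-based agent generalize across topologies.
ring = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [0, 2]}
feats = {0: 1.0, 1: 0.0, 2: 0.0, 3: 0.0}
print(round(readout(message_passing(ring, feats)), 6))  # → 1.0
```

A fully connected network, by contrast, would need a fixed-size input vector, so its weights are tied to one specific topology.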
Instructions to execute
See the execution instructions
Description
To know more details about the implementation used in the experiments contact: felician.paul.almasan@upc.edu
Please cite the corresponding article if you use the code from this repository:
@article{almasan2019deep,
  title={Deep reinforcement learning meets graph neural networks: Exploring a routing optimization use case},
  author={Almasan, Paul and Su{\'a}rez-Varela, Jos{\'e} and Badia-Sampera, Arnau and Rusek, Krzysztof and Barlet-Ros, Pere and Cabellos-Aparicio, Albert},
  journal={arXiv preprint arXiv:1910.07421},
  year={2019}
}
Owner
- Name: Knowledge-Defined Networking
- Login: knowledgedefinednetworking
- Kind: organization
- Location: Barcelona
- Website: http://www.ac.upc.edu/en
- Repositories: 5
- Profile: https://github.com/knowledgedefinednetworking
Training datasets to encourage open research, development and benchmarking of Machine Learning algorithms applied to Computer Networks.
Citation (CITATION.cff)
cff-version: 1.2.0
message: "If you use this software, please cite it as below."
authors:
- family-names: "Almasan"
  given-names: "Paul"
  orcid: "https://orcid.org/0000-0003-3903-6759"
title: "Code of DRL+GNN architecture in OTN"
version: 1.0
date-released: 2021-11-22
url: "https://github.com/knowledgedefinednetworking/DRL-GNN"
GitHub Events
Total
- Watch event: 44
- Fork event: 11
Last Year
- Watch event: 44
- Fork event: 11
Dependencies
- gym *
- Keras-Preprocessing ==1.1.2
- Markdown ==3.3.6
- Pillow ==8.4.0
- Werkzeug ==2.0.2
- absl-py ==1.0.0
- astunparse ==1.6.3
- cachetools ==4.2.4
- certifi ==2021.10.8
- charset-normalizer ==2.0.7
- cloudpickle ==2.0.0
- cycler ==0.11.0
- flatbuffers ==2.0
- fonttools ==4.28.1
- gast ==0.4.0
- google-auth ==2.3.3
- google-auth-oauthlib ==0.4.6
- google-pasta ==0.2.0
- grpcio ==1.42.0
- gym ==0.21.0
- h5py ==3.6.0
- idna ==3.3
- importlib-metadata ==4.8.2
- keras ==2.7.0
- kiwisolver ==1.3.2
- libclang ==12.0.0
- matplotlib ==3.5.0
- networkx ==2.6.3
- numpy ==1.21.4
- oauthlib ==3.1.1
- opt-einsum ==3.3.0
- packaging ==21.3
- protobuf ==3.19.1
- pyasn1 ==0.4.8
- pyasn1-modules ==0.2.8
- pyparsing ==3.0.6
- python-dateutil ==2.8.2
- requests ==2.26.0
- requests-oauthlib ==1.3.0
- rsa ==4.7.2
- scipy ==1.7.2
- setuptools-scm ==6.3.2
- six ==1.16.0
- tensorboard ==2.7.0
- tensorboard-data-server ==0.6.1
- tensorboard-plugin-wit ==1.8.0
- tensorflow ==2.7.0
- tensorflow-estimator ==2.7.0
- tensorflow-io-gcs-filesystem ==0.22.0
- termcolor ==1.1.0
- tomli ==1.2.2
- typing-extensions ==4.0.0
- urllib3 ==1.26.7
- wrapt ==1.13.3
- zipp ==3.6.0
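The pinned versions above (TensorFlow 2.7, gym 0.21) suggest the environment can be reproduced in an isolated virtualenv; the commands below are a hedged sketch assuming a Python 3.7–3.9 interpreter (the range TensorFlow 2.7 supports) and a Unix-like shell, with only a few key packages shown.

```shell
# Illustrative setup, not an official install script from the repository.
python -m venv drl-gnn-env
source drl-gnn-env/bin/activate
pip install tensorflow==2.7.0 gym==0.21.0 networkx==2.6.3 matplotlib==3.5.0
```

In practice the full pinned list would normally live in a requirements.txt and be installed with `pip install -r requirements.txt`.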