graphenv
graphenv: a Python library for reinforcement learning on graph search spaces - Published in JOSS (2022)
Science Score: 93.0%
This score indicates how likely this project is to be science-related based on various indicators:
- ○ CITATION.cff file
- ✓ codemeta.json file: found codemeta.json file
- ✓ .zenodo.json file: found .zenodo.json file
- ✓ DOI references: found 4 DOI reference(s) in README and JOSS metadata
- ✓ Academic publication links: links to joss.theoj.org, zenodo.org
- ○ Committers with academic emails
- ○ Institutional organization owner
- ✓ JOSS paper metadata: published in Journal of Open Source Software
Repository
Reinforcement learning for combinatorial optimization over directed graphs
Basic Info
- Host: GitHub
- Owner: NREL
- License: bsd-3-clause
- Language: Python
- Default Branch: main
- Homepage: https://NREL.github.io/graph-env/
- Size: 8.41 MB
Statistics
- Stars: 41
- Watchers: 7
- Forks: 9
- Open Issues: 1
- Releases: 18
Metadata Files
README.md
graph-env
The graphenv Python library is designed to
1) make graph search problems more readily expressible as RL problems via an extension of the OpenAI gym API, while
2) enabling their solution via scalable learning algorithms in the popular RLlib library.
RLlib provides out-of-the-box support for both parametrically-defined actions and masking of invalid actions. However, native support for action spaces where the action choices change for each state is challenging to implement in a computationally efficient fashion. The graphenv library provides utility classes that simplify the flattening and masking of action observations for choosing from a set of successor states at every node in a graph search.
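The masking idea described above can be illustrated in isolation. The sketch below is not graphenv's or RLlib's actual internals; it only shows the standard trick of forcing invalid actions to zero probability by setting their logits to negative infinity before the softmax. The mask layout and the `max_num_children`-sized padding are assumptions for illustration.

```python
import numpy as np

def masked_softmax(logits: np.ndarray, mask: np.ndarray) -> np.ndarray:
    """Return action probabilities with invalid actions (mask == 0) zeroed out."""
    valid = mask.astype(bool)
    # Invalid actions get -inf logits, so exp(-inf) == 0 after the softmax.
    masked = np.where(valid, logits, -np.inf)
    # Subtract the max over valid entries for numerical stability.
    exp = np.exp(masked - masked[valid].max())
    return exp / exp.sum()

# A node padded to max_num_children = 4, but with only 2 valid successors:
logits = np.array([0.5, 1.0, 2.0, -0.3])
mask = np.array([1, 1, 0, 0])
probs = masked_softmax(logits, mask)
```

Invalid successors receive exactly zero probability regardless of their logit values, so the policy can only ever select a real child of the current vertex.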
Installation
Graphenv can be installed with pip:
pip install graphenv
Quick Start
graph-env allows users to create a customized graph search by subclassing the Vertex class. Basic examples are provided in the graphenv/examples folder. The following code snippet shows how to randomly sample from valid actions for a random walk down a 1D corridor:
```python
import random

from graphenv.examples.hallway.hallway_state import HallwayState
from graphenv.graph_env import GraphEnv

state = HallwayState(corridor_length=10)
env = GraphEnv({"state": state, "max_num_children": 2})

obs = env.make_observation()
done = False
total_reward = 0

while not done:
    action = random.choice(range(len(env.state.children)))
    obs, reward, terminated, truncated, info = env.step(action)
    done = terminated or truncated
    total_reward += reward
```
Additional details on this example are given in the documentation.
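The "graph search as RL" idea behind the example can also be sketched without any dependencies. The `HallwayVertex` class below is a hypothetical stand-in that mimics the shape of the example above (a state exposing its valid successors via a `children` property); it is not graphenv's actual `Vertex` or `HallwayState` interface.

```python
import random

class HallwayVertex:
    """Hypothetical stand-in for a graph-search vertex on a 1-D corridor."""

    def __init__(self, position: int, corridor_length: int):
        self.position = position
        self.corridor_length = corridor_length

    @property
    def children(self):
        """Valid successor states: step left or right, staying in bounds."""
        return [
            HallwayVertex(p, self.corridor_length)
            for p in (self.position - 1, self.position + 1)
            if 0 <= p <= self.corridor_length
        ]

    @property
    def terminal(self) -> bool:
        """The search ends when the walker reaches the end of the corridor."""
        return self.position == self.corridor_length

# Random walk from the start of the corridor to its end.
random.seed(0)
state = HallwayVertex(0, corridor_length=10)
steps = 0
while not state.terminal:
    state = random.choice(state.children)
    steps += 1
```

Because each state's action set is just its `children` list, the number of valid actions varies by state (one at the left wall, two in the interior), which is exactly the per-state action-space variation the library's masking utilities handle.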
Documentation
The documentation is hosted on GitHub Pages.
Contributing
We welcome bug reports, suggestions for new features, and pull requests. See our contributing guidelines for more details.
License
graph-env is licensed under the BSD 3-Clause License.
Copyright (c) 2022, Alliance for Sustainable Energy, LLC
Owner
- Name: National Renewable Energy Laboratory
- Login: NREL
- Kind: organization
- Location: Golden, CO
- Website: http://www.nrel.gov
- Repositories: 599
- Profile: https://github.com/NREL
JOSS Publication
graphenv: a Python library for reinforcement learning on graph search spaces
Authors
Computational Sciences Center, National Renewable Energy Laboratory, Golden CO 80401, USA
Tags
reinforcement learning, graph search, combinatorial optimization
CodeMeta (codemeta.json)
{
"@context": "https://raw.githubusercontent.com/codemeta/codemeta/master/codemeta.jsonld",
"@type": "Code",
"author": [
{
"@id": "0000-0002-7928-3722",
"@type": "Person",
"email": "peter.stjohn@nrel.gov",
"name": "Peter St. John",
"affiliation": "Biosciences Center, National Renewable Energy Laboratory, Golden CO 80401, USA"
},
{
"@id": "0000-0001-6140-1957",
"@type": "Person",
"email": "Dave.Biagioni@nrel.gov",
"name": "Dave Biagioni",
"affiliation": "Computational Sciences Center, National Renewable Energy Laboratory, Golden CO 80401, USA"
},
{
"@id": "0000-0003-0078-6560",
"@type": "Person",
"email": "Struan.Clark@nrel.gov",
"name": "Struan Clark",
"affiliation": "Computational Sciences Center, National Renewable Energy Laboratory, Golden CO 80401, USA"
},
{
"@id": "0000-0002-5867-3561",
"@type": "Person",
"email": "Charles.Tripp@nrel.gov",
"name": "Charles Tripp",
"affiliation": "Computational Sciences Center, National Renewable Energy Laboratory, Golden CO 80401, USA"
},
{
"@id": "0000-0001-5132-0168",
"@type": "Person",
"email": "Dmitry.Duplyakin@nrel.gov",
"name": "Dmitry Duplyakin",
"affiliation": "Computational Sciences Center, National Renewable Energy Laboratory, Golden CO 80401, USA"
},
{
"@id": "0000-0003-2828-1273",
"@type": "Person",
"email": "Jeffrey.Law@nrel.gov",
"name": "Jeffrey Law",
"affiliation": "Biosciences Center, National Renewable Energy Laboratory, Golden CO 80401, USA"
}
],
"identifier": "",
"codeRepository": "https://github.com/NREL/graph-env",
"datePublished": "2022-07-21",
"dateModified": "2022-07-21",
"dateCreated": "2022-07-21",
"description": "Reinforcement learning on directed graphs",
"keywords": "reinforcement learning, combinatorial optimization",
"license": "BSD 3-Clause License",
"title": "graph-env",
"version": "v0.0.6"
}
GitHub Events
Total
- Watch event: 3
- Fork event: 6
Last Year
- Watch event: 3
- Fork event: 6
Committers
Last synced: 5 months ago
Top Committers
| Name | Email | Commits |
|---|---|---|
| Peter St. John | p****n@g****m | 132 |
| Biagioni | d****n@n****v | 27 |
| Peter St. John | p****n@n****v | 21 |
| Jeff Law | j****w@n****v | 15 |
| Struan Clark | x****n | 14 |
| Dave Biagioni | d****i@n****v | 14 |
| ctripp | c****p@n****v | 10 |
| Dave Biagioni | d****i | 2 |
| dependabot[bot] | 4****] | 1 |
| Konstantinos Mixios | k****s@g****m | 1 |
| Dmitry Duplyakin | d****n@n****v | 1 |
Committer Domains (Top 20 + Academic)
Issues and Pull Requests
Last synced: 4 months ago
All Time
- Total issues: 15
- Total pull requests: 39
- Average time to close issues: 19 days
- Average time to close pull requests: about 1 month
- Total issue authors: 6
- Total pull request authors: 7
- Average comments per issue: 1.0
- Average comments per pull request: 0.46
- Merged pull requests: 37
- Bot issues: 0
- Bot pull requests: 1
Past Year
- Issues: 0
- Pull requests: 0
- Average time to close issues: N/A
- Average time to close pull requests: N/A
- Issue authors: 0
- Pull request authors: 0
- Average comments per issue: 0
- Average comments per pull request: 0
- Merged pull requests: 0
- Bot issues: 0
- Bot pull requests: 0
Top Authors
Issue Authors
- pstjohn (6)
- osorensen (3)
- davebiagioni (2)
- vwxyzjn (2)
- mjstahlberg (1)
- sundar19 (1)
Pull Request Authors
- pstjohn (27)
- davebiagioni (7)
- jlaw9 (2)
- iammix (1)
- dependabot[bot] (1)
- xtruan (1)
- dmdu (1)
Dependencies
- JamesIves/github-pages-deploy-action v4.2.5 composite
- actions/checkout v3 composite
- actions/setup-python v4 composite
- actions/checkout v2 composite
- actions/upload-artifact v1 composite
- openjournals/openjournals-draft-action master composite
- actions/checkout v3 composite
- actions/setup-python v4 composite
- actions/setup-python v2 composite
- pypa/gh-action-pypi-publish release/v1 composite
- Sphinx ==4.5.0
- ipython ==8.2.0
- nbconvert ==6.5.1
- nbsphinx ==0.8.8
- sphinx-rtd-theme ==1.0.0
- gym
- networkx
- pip
- pytest
- pytorch
- tensorflow