synapse-rl
Synapse RL: A PyTorch Framework for Reinforcement Learning
Science Score: 67.0%
This score indicates how likely this project is to be science-related, based on the following indicators:

- ✓ CITATION.cff file: found CITATION.cff file
- ✓ codemeta.json file: found codemeta.json file
- ✓ .zenodo.json file: found .zenodo.json file
- ✓ DOI references: found 3 DOI reference(s) in README
- ✓ Academic publication links: links to zenodo.org
- ○ Academic email domains
- ○ Institutional organization owner
- ○ JOSS paper metadata
- ○ Scientific vocabulary similarity: low similarity (9.9%) to scientific vocabulary
Keywords
Repository
Synapse RL: A PyTorch Framework for Reinforcement Learning
Basic Info
Statistics
- Stars: 9
- Watchers: 1
- Forks: 1
- Open Issues: 0
- Releases: 1
Topics
Metadata Files
README.md
Synapse Reinforcement Learning
Synapse is a framework for implementing Reinforcement Learning (RL) algorithms in PyTorch. The repository includes popular algorithms such as Deep Q-Networks, Policy Gradients, and Actor-Critic, among others.
One of the advantages of using Synapse-RL is its compatibility with gym-based environments. Gym provides a standard interface for working with environments to benchmark RL models. Synapse-RL also includes various utility functions and classes that make it easy to experiment with different hyperparameters, test different training approaches, and visualize training results.
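The gym interface mentioned above boils down to two calls: `reset()` returns an initial observation, and `step(action)` returns an observation, a reward, a done flag, and an info dict. As a sketch of that contract (a toy environment written here for illustration, not part of Synapse-RL), consider:

```python
# Minimal sketch of the gym-style interface that Synapse-RL targets.
# ToyEnv is a hypothetical 1-D walk: action 1 moves right, action 0 moves left.
class ToyEnv:
    def __init__(self, size=5):
        self.size = size  # position `size` is the goal state
        self.pos = 0

    def reset(self):
        # Return the initial observation, as gym environments do
        self.pos = 0
        return self.pos

    def step(self, action):
        # Apply the action and return (observation, reward, done, info)
        self.pos = min(self.size, self.pos + 1) if action == 1 else max(0, self.pos - 1)
        reward = 1.0 if self.pos == self.size else 0.0
        done = self.pos == self.size
        return self.pos, reward, done, {}

# The standard interaction loop any gym-compatible agent would run
env = ToyEnv()
obs = env.reset()
total_reward, done = 0.0, False
while not done:
    action = 1  # a trivial "always move right" policy stands in for the agent
    obs, reward, done, info = env.step(action)
    total_reward += reward
```

Because Synapse-RL agents speak this same interface, they can be benchmarked on any environment that implements it.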
Colab
Supported Algorithms
| RL Algorithm | Description |
| --- | --- |
| Deep Q Learning | Discrete |
| Policy Gradient | Discrete |
| Actor Critic (A2C) | Discrete |
| Deep Deterministic Policy Gradient (DDPG) | Continuous |
| Soft Actor Critic (SAC) | Continuous |
| Proximal Policy Optimization (PPO) | Continuous |
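The table above splits the algorithms by action-space type, which is the first thing to check when picking one for an environment. A small lookup helper (hypothetical, not part of the Synapse-RL API) makes the selection explicit:

```python
# Hypothetical helper mapping Synapse-RL's algorithms to their supported
# action-space type, mirroring the table above.
SUPPORTED = {
    "Deep Q Learning": "discrete",
    "Policy Gradient": "discrete",
    "A2C": "discrete",
    "DDPG": "continuous",
    "SAC": "continuous",
    "PPO": "continuous",
}

def candidates(action_space_type):
    """Return the algorithms compatible with the given action-space type."""
    return [name for name, kind in SUPPORTED.items() if kind == action_space_type]
```

For example, a Pendulum-style environment with a continuous action space would shortlist DDPG, SAC, and PPO.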
Tensorboard
Synapse now supports TensorBoard. Launch it from the log directory:

```bash
tensorboard --logdir ./
```
Inference
```python
import gymnasium as gym
from syn_rl import SAC

# Initialize the Pendulum/MountainCar environment and agent
env = gym.make('Pendulum-v1', g=9.81)
state_size = env.observation_space.shape[0]
action_size = env.action_space.shape[0]
agent = SAC(state_size, action_size,
            action_range=[env.action_space.low, env.action_space.high],
            hidden_dim=[128])
result = agent.train(env, episodes=500)
```
Citation
Owner
- Name: Amirhossein Heydarian Ardakani
- Login: arbit3rr
- Kind: user
- Website: https://scholar.google.com/citations?user=5U9fGhYAAAAJ&hl=en
- Repositories: 13
- Profile: https://github.com/arbit3rr
Citation (CITATION.cff)
```yaml
# This CITATION.cff file was generated with cffinit.
# Visit https://bit.ly/cffinit to generate yours today!
cff-version: 1.2.0
title: Synapse RL
message: PyTorch framework for reinforcement learning
type: software
authors:
  - given-names: Amirhossein
    family-names: Heydarian Ardakani
    email: amirhossein261077@live.com
identifiers:
  - type: doi
    value: 10.5281/zenodo.8009955
repository-code: 'https://github.com/amirhosseinh77/Synapse-RL'
date-released: '2022-11-22'
```
GitHub Events
Total
- Watch event: 1
- Push event: 39
Last Year
- Watch event: 1
- Push event: 39
