stable-learning-control

A framework for training theoretically stable (and robust) Reinforcement Learning control algorithms.

https://github.com/rickstaa/stable-learning-control

Science Score: 54.0%

This score indicates how likely this project is to be science-related based on various indicators:

  • CITATION.cff file
    Found CITATION.cff file
  • codemeta.json file
    Found codemeta.json file
  • .zenodo.json file
    Found .zenodo.json file
  • DOI references
  • Academic publication links
    Links to: arxiv.org, zenodo.org
  • Academic email domains
  • Institutional organization owner
  • JOSS paper metadata
  • Scientific vocabulary similarity
    Low similarity (14.3%) to scientific vocabulary

Keywords

artificial-intelligence control deep-learning framework gaussian-networks gymnasium machine-learning neural-networks openai-gym reinforcement-learning reinforcement-learning-agents reinforcement-learning-algorithms robustness simulation stability
Last synced: 4 months ago

Repository

A framework for training theoretically stable (and robust) Reinforcement Learning control algorithms.

Basic Info
Statistics
  • Stars: 6
  • Watchers: 3
  • Forks: 1
  • Open Issues: 4
  • Releases: 98
Topics
artificial-intelligence control deep-learning framework gaussian-networks gymnasium machine-learning neural-networks openai-gym reinforcement-learning reinforcement-learning-agents reinforcement-learning-algorithms robustness simulation stability
Created over 5 years ago · Last pushed over 1 year ago
Metadata Files
Readme Changelog Contributing License Citation Zenodo

README.md

Stable Learning Control

[Badges: GitHub release (latest by date), Python 3, codecov, Contributions, DOI, Weights & Biases dashboard]

Package Overview

The Stable Learning Control (SLC) framework is a collection of Reinforcement Learning control algorithms with stability and robustness guarantees. The algorithms build on the Lyapunov actor-critic architecture introduced by Han et al. 2020 and derive their guarantees from Lyapunov stability theory. They are tailored for gymnasium environments that expose a positive definite cost function; several ready-to-use compatible environments can be found in the stable-gym package. A sketch of what such an environment looks like is given below.
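
To make the environment requirement concrete, here is a minimal hypothetical sketch (the class name QuadraticCostEnv, the dynamics, and the cost matrix are illustrative only and are not part of SLC or stable-gym; it assumes gymnasium and numpy are installed) of a gymnasium environment whose per-step signal is a positive definite cost rather than a reward to be maximised:

# Hypothetical toy environment (not part of SLC or stable-gym) showing the
# kind of interface the SLC algorithms expect: a standard gymnasium.Env whose
# per-step signal is a positive definite cost c(s) = s^T Q s.
import gymnasium as gym
import numpy as np


class QuadraticCostEnv(gym.Env):
    """2-D point mass; the returned 'reward' is a cost that is zero only at the origin."""

    def __init__(self):
        high = np.full(2, 10.0, dtype=np.float32)
        self.observation_space = gym.spaces.Box(-high, high, dtype=np.float32)
        self.action_space = gym.spaces.Box(-1.0, 1.0, shape=(2,), dtype=np.float32)
        self.Q = np.eye(2, dtype=np.float32)  # positive definite cost weights
        self.state = np.zeros(2, dtype=np.float32)

    def reset(self, *, seed=None, options=None):
        super().reset(seed=seed)
        self.state = self.np_random.uniform(-1.0, 1.0, size=2).astype(np.float32)
        return self.state.copy(), {}

    def step(self, action):
        # Simple integrator dynamics; the action nudges the state.
        step_state = self.state + 0.05 * np.asarray(action, dtype=np.float32)
        self.state = np.clip(step_state, -10.0, 10.0)
        cost = float(self.state @ self.Q @ self.state)  # c(s) >= 0, c(0) = 0
        terminated = bool(np.linalg.norm(self.state) >= 10.0)
        return self.state.copy(), cost, terminated, False, {}

As we understand it, the SLC algorithms interpret this returned value as a cost to be minimised; because it is zero only at the target state, it is the kind of signal the Lyapunov-based guarantees rely on.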

Installation and Usage

Please see the docs for installation and usage instructions.

Contributing

We use husky pre-commit hooks and GitHub Actions to enforce high code quality. Please check the contributing guidelines before contributing to this repository.

[!NOTE] We used husky instead of pre-commit, which is more commonly used with Python projects, because not all of the tools we wanted to use could be integrated with pre-commit. Please feel free to open a PR that switches to pre-commit if this is no longer the case.

References

  • Han et al. 2020 - Used as a basis for the Lyapunov actor-critic architecture (its core stability condition is sketched after this list).
  • Spinning Up - Used as a basis for the code structure.
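
As a rough orientation only (this is our paraphrase of the idea in Han et al. 2020, not notation or code taken from this repository), the Lyapunov actor-critic constrains the policy so that a learned Lyapunov candidate L_c decreases in expectation along closed-loop trajectories, at a rate tied to the positive definite cost c:

\mathbb{E}_{s' \sim P(\cdot \mid s, a)}\big[L_c(s')\big] - L_c(s) \le -\alpha\, c(s), \qquad \alpha > 0

Because c is positive definite (zero only at the desired equilibrium), repeatedly applying this decrease condition pushes the closed-loop system toward that equilibrium, which is where the stability guarantee comes from.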

Owner

  • Name: Rick Staa
  • Login: rickstaa
  • Kind: user
  • Location: Amsterdam
  • Company: Livepeer

Building the future of video AI @livepeer 🚀 | Open-source advocate & tech enthusiast | Robotics & AI researcher | Jazz/blues enthusiast 🎹.

Citation (CITATION.cff)

# This CITATION.cff file was generated with cffinit.
# Visit https://bit.ly/cffinit to generate yours today!
cff-version: 1.2.0
title: rickstaa/stable-learning-control
message: >-
  If you want to cite this software, please cite it using the
  metadata from this file.
type: software
authors:
  - given-names: Rick
    family-names: Staa
    affiliation: TU Delft
    orcid: 'https://orcid.org/0000-0003-4835-2040'
  - given-names: Wei
    family-names: Pan
    affiliation: The University of Manchester
    orcid: 'https://orcid.org/0000-0003-1121-9879'
identifiers:
  - type: url
    value: 'https://zenodo.org/badge/latestdoi/271989240'
repository-code: 'https://github.com/rickstaa/stable-learning-control'
abstract: >-
  A framework for training theoretically stable (and robust)
  Reinforcement Learning control algorithms.
keywords:
  - reinforcement-learning
  - control
  - stability
  - robustness
  - simulation
  - openai-gym
  - gymnasium
  - artificial-intelligence
  - deep-learning
  - neural-networks
  - machine-learning
  - framework
  - gaussian-networks
license: MIT

Issues and Pull Requests

Last synced: over 1 year ago

All Time
  • Total issues: 31
  • Total pull requests: 169
  • Average time to close issues: 2 months
  • Average time to close pull requests: 3 days
  • Total issue authors: 2
  • Total pull request authors: 4
  • Average comments per issue: 1.26
  • Average comments per pull request: 0.34
  • Merged pull requests: 147
  • Bot issues: 1
  • Bot pull requests: 82
Past Year
  • Issues: 0
  • Pull requests: 75
  • Average time to close issues: N/A
  • Average time to close pull requests: 3 days
  • Issue authors: 0
  • Pull request authors: 2
  • Average comments per issue: 0
  • Average comments per pull request: 0.21
  • Merged pull requests: 68
  • Bot issues: 0
  • Bot pull requests: 36
Top Authors
Issue Authors
  • rickstaa (31)
  • renovate[bot] (1)
Pull Request Authors
  • rickstaa (115)
  • dependabot[bot] (63)
  • renovate[bot] (20)
  • github-actions[bot] (13)
Top Labels
Issue Labels
bug (5) wontfix (4) enhancement (4) training report (3)
Pull Request Labels
dependencies (62) javascript (51) autorelease: tagged (7) github_actions (7) autorelease: pending (5) CI (3)