riemannian-po-for-lqg

This is an implementation of my paper Dynamic Output-feedback Synthesis Orbit Geometry: Quotient Manifolds and LQG Direct Policy Optimization.

https://github.com/rainlabuw/riemannian-po-for-lqg

Science Score: 44.0%

This score indicates how likely this project is to be science-related, based on the following indicators:

  • CITATION.cff file: found
  • codemeta.json file: found
  • .zenodo.json file: found
  • DOI references
  • Academic publication links
  • Academic email domains
  • Institutional organization owner
  • JOSS paper metadata
  • Scientific vocabulary similarity: low similarity (4.1%) to scientific vocabulary
Last synced: 6 months ago

Repository


Basic Info
  • Host: GitHub
  • Owner: Rainlabuw
  • Language: Python
  • Default Branch: main
  • Size: 234 KB
Statistics
  • Stars: 0
  • Watchers: 0
  • Forks: 0
  • Open Issues: 0
  • Releases: 0
Created almost 2 years ago · Last pushed over 1 year ago
Metadata Files
  • README
  • Citation

README.md

Riemannian-PO-for-LQG

This is an implementation of the policy optimization algorithm introduced in my paper Dynamic Output-feedback Synthesis Orbit Geometry: Quotient Manifolds and LQG Direct Policy Optimization.

This repo provides methods for running gradient descent (GD) on the Linear-Quadratic-Gaussian (LQG) cost. It also performs Riemannian gradient descent (RGD) with respect to the Krishnaprasad-Martin (KM) metric introduced in the paper above.

LQG_methods.py contains the container class with all the methods needed to run policy optimization on your LQG problem setup.

main.py runs a simple GD and RGD on a randomly generated system.

experiment.py runs the exact experiment I included in the paper above.
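To make the setup concrete, here is a minimal, self-contained sketch of the Euclidean GD baseline: it evaluates the LQG cost of a dynamic output-feedback controller through the closed-loop Lyapunov equation and descends it with finite-difference gradients. Note this is not the repo's code. All matrices, dimensions, and step sizes below are illustrative assumptions, and the gradients are numerical rather than the analytic or Riemannian ones used in the paper.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

rng = np.random.default_rng(0)
n, m, p = 3, 1, 1  # state, input, output dimensions (illustrative)

# Random plant with a stable A, so a small controller keeps the loop stable
A = -np.eye(n) + 0.1 * rng.standard_normal((n, n))
B = rng.standard_normal((n, m))
C = rng.standard_normal((p, n))
Q, R = np.eye(n), np.eye(m)  # state / input cost weights
W, V = np.eye(n), np.eye(p)  # process / measurement noise covariances

def lqg_cost(Ak, Bk, Ck):
    """LQG cost of the full-order dynamic output-feedback controller (Ak, Bk, Ck)."""
    Acl = np.block([[A, B @ Ck], [Bk @ C, Ak]])
    if np.max(np.linalg.eigvals(Acl).real) >= 0:
        return np.inf  # closed loop unstable: infinite cost
    Wcl = np.block([[W, np.zeros((n, n))],
                    [np.zeros((n, n)), Bk @ V @ Bk.T]])
    Qcl = np.block([[Q, np.zeros((n, n))],
                    [np.zeros((n, n)), Ck.T @ R @ Ck]])
    # Steady-state covariance: Acl @ Sigma + Sigma @ Acl.T + Wcl = 0
    Sigma = solve_continuous_lyapunov(Acl, -Wcl)
    return float(np.trace(Qcl @ Sigma))

def fd_grad(f, X, eps=1e-6):
    """Central finite-difference gradient of a scalar f w.r.t. matrix X."""
    G = np.zeros_like(X)
    for idx in np.ndindex(X.shape):
        E = np.zeros_like(X)
        E[idx] = eps
        G[idx] = (f(X + E) - f(X - E)) / (2 * eps)
    return G

# Small stabilizing initial controller (assumed, not from the paper)
Ak = -np.eye(n)
Bk = 0.1 * rng.standard_normal((n, p))
Ck = 0.1 * rng.standard_normal((m, n))

costs = [lqg_cost(Ak, Bk, Ck)]
step = 1e-3
for _ in range(20):  # plain (Euclidean) gradient descent on the LQG cost
    gA = fd_grad(lambda X: lqg_cost(X, Bk, Ck), Ak)
    gB = fd_grad(lambda X: lqg_cost(Ak, X, Ck), Bk)
    gC = fd_grad(lambda X: lqg_cost(Ak, Bk, X), Ck)
    Ak, Bk, Ck = Ak - step * gA, Bk - step * gB, Ck - step * gC
    costs.append(lqg_cost(Ak, Bk, Ck))

print(f"LQG cost: {costs[0]:.4f} -> {costs[-1]:.4f}")
```

The RGD variant in the repo replaces the raw Euclidean gradient above with its Riemannian counterpart under the KM metric, which accounts for the similarity-transformation orbit of equivalent controllers; see the paper for the metric itself.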

Feel free to ask questions about any confusing parts, or open issues for hidden bugs! :)

-Spencer Kraisler

Owner

  • Name: RAIN Lab
  • Login: Rainlabuw
  • Kind: organization
  • Email: mesbahi@uw.edu
  • Location: United States of America

Citation (CITATION.cff)

# This CITATION.cff file was generated with cffinit.
# Visit https://bit.ly/cffinit to generate yours today!

cff-version: 1.2.0
title: Riemannian-PO-for-LQG
message: >-
  If you use this software, please cite it using the
  metadata from this file.
type: software
authors:
  - given-names: Spencer
    family-names: Kraisler
    email: spencerkraisler@gmail.com
    affiliation: University of Washington
    orcid: 'https://orcid.org/0009-0009-4674-0104'
  - given-names: Mehran
    family-names: Mesbahi
    email: mesbahi@uw.edu
    affiliation: University of Washington
repository-code: 'https://github.com/Rainlabuw/Riemannian-PO-for-LQG'
abstract: >-
  This is an implementation of my paper Dynamic
  Output-feedback Synthesis Orbit Geometry: Quotient
  Manifolds and LQG Direct Policy Optimization by Kraisler
  and Mesbahi.


  This repo is an implementation of gradient descent (GD) on
  the Linear-Quadratic-Gaussian (LQG) cost. It also performs
  Riemannian gradient descent (RGD) with respect to the
  Krishnaprasad-Martin (KM) metric introduced in the above
  paper.
keywords:
  - machine-learning
  - optimization
  - optimal-control
  - policy-optimization
