control-neuralode

Neural ODEs as Feedback Policies for Nonlinear Optimal Control (IFAC 2023) https://doi.org/10.1016/j.ifacol.2023.10.1248

https://github.com/ilyaorson/control-neuralode

Science Score: 54.0%

This score indicates how likely this project is to be science-related based on various indicators:

  • CITATION.cff file
    Found CITATION.cff file
  • codemeta.json file
    Found codemeta.json file
  • .zenodo.json file
  • DOI references
    Found 3 DOI reference(s) in README
  • Academic publication links
    Links to: arxiv.org
  • Academic email domains
  • Institutional organization owner
  • JOSS paper metadata
  • Scientific vocabulary similarity
    Low similarity (11.0%) to scientific vocabulary

Keywords

adjoint-sensitivities automatic-differentiation julia neural-networks neuralode optimal-control optimization ordinary-differential-equations
Last synced: 6 months ago

Repository

Neural ODEs as Feedback Policies for Nonlinear Optimal Control (IFAC 2023) https://doi.org/10.1016/j.ifacol.2023.10.1248

Basic Info
Statistics
  • Stars: 16
  • Watchers: 2
  • Forks: 1
  • Open Issues: 0
  • Releases: 0
Topics
adjoint-sensitivities automatic-differentiation julia neural-networks neuralode optimal-control optimization ordinary-differential-equations
Created over 5 years ago · Last pushed over 2 years ago
Metadata Files
Readme · License · Citation

README.md

Feedback Control Policies with Neural ODEs

This code showcases how a state-feedback neural policy, as commonly used in reinforcement learning, may be used similarly in an optimal control problem while enforcing state and control constraints.

Constrained Van der Pol problem

$$\begin{aligned}
\min_{\theta} \quad & J = \int_0^5 \left( x_1^2 + x_2^2 + u^2 \right) dt, \\
\textrm{s.t.} \quad & \dot x_1(t) = x_1 (1 - x_2^2) - x_2 + u, \\
& \dot x_2(t) = x_1, \\
& u(t) = \pi_\theta(x_1, x_2), \\
& x(t_0) = (0, 1), \\
& x_1(t) + 0.4 \geq 0, \\
& -0.3 \leq u(t) \leq 1.
\end{aligned}$$
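As a concrete illustration, here is a minimal, self-contained Julia sketch of this problem (not the repository's implementation): a small hand-rolled tanh network stands in for $\pi_\theta$, its output is squashed into the control bounds, and the running cost is accumulated as an extra state. The network size, initialization, and the use of `OrdinaryDiffEq` with `Tsit5` are illustrative assumptions.

```julia
using OrdinaryDiffEq

# Hypothetical tiny policy pi_theta: 2 -> 8 -> 1, parameters packed in a flat
# vector (W1 is 8x2 -> p[1:16], b1 -> p[17:24], W2 is 1x8 -> p[25:32], b2 -> p[33]).
function policy(x, p)
    W1 = reshape(p[1:16], 8, 2)
    b1 = p[17:24]
    W2 = reshape(p[25:32], 1, 8)
    b2 = p[33]
    h = tanh.(W1 * x .+ b1)
    return (W2 * h)[1] + b2
end

# Squash the raw network output into the control bounds -0.3 <= u <= 1.
squash(v) = -0.3 + 1.3 * (tanh(v) + 1) / 2

# Van der Pol dynamics with the running cost accumulated as a third state.
function vdp!(dx, x, p, t)
    u = squash(policy(x[1:2], p))
    dx[1] = x[1] * (1 - x[2]^2) - x[2] + u
    dx[2] = x[1]
    dx[3] = x[1]^2 + x[2]^2 + u^2
end

p0 = 0.1 .* randn(33)
prob = ODEProblem(vdp!, [0.0, 1.0, 0.0], (0.0, 5.0), p0)
sol = solve(prob, Tsit5())
@show sol.u[end][3]  # J for the initial, untrained policy
```

Note that this sketch enforces the control bounds exactly through the squashing function, but does not enforce the state path constraint $x_1(t) + 0.4 \geq 0$; in practice that constraint would enter through the cost (e.g. as a penalty term).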

Phase Space with embedded policy (before and after optimization)

(Figures: `vdp_initial`, `vdp_constrained`)

JuliaCon 2021

The main ideas were presented in a talk at JuliaCon 2021.

Watch the video

Running the study cases

This code requires Julia 1.7.

Reproduce the environment with the required dependencies:

```julia
julia> using Pkg; Pkg.activate(; temp=true)
julia> Pkg.add(url="https://github.com/IlyaOrson/ControlNeuralODE.jl")
```

Run the test cases:

```julia
julia> using ControlNeuralODE: van_der_pol, bioreactor, batch_reactor, semibatch_reactor
julia> van_der_pol(store_results=true)
```

This will generate plots while the optimization runs and store result data in data/.

Methodology (Control Vector Iteration for the Neural Policy parameters)

By substituting the control function of the problem with the output of the policy, the policy weights become the new unconstrained controls of the system. The problem becomes a parameter estimation problem, where the Neural ODE adjoint method may be used to backpropagate sensitivities of the functional cost with respect to the weights.
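A hedged sketch of that gradient computation, building on the Van der Pol snippet above: with SciMLSensitivity loaded, Zygote can differentiate the accumulated cost through the ODE solve via adjoint sensitivities. The package choices and adjoint settings are assumptions for illustration, not necessarily the repository's exact setup.

```julia
using Zygote, SciMLSensitivity

# J(theta): solve the augmented ODE and read off the accumulated running cost.
function loss(p)
    _prob = remake(prob, p = p)
    s = solve(_prob, Tsit5(), sensealg = InterpolatingAdjoint())
    return Array(s)[3, end]
end

grad = Zygote.gradient(loss, p0)[1]  # sensitivities w.r.t. the policy weights
```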

This method was originally called the Kelley-Bryson gradient procedure (developed in the 1960s), which is historically interesting as one of the earliest uses of backpropagation. Its continuous-time extension is known as Control Vector Iteration (CVI) in the optimal control literature, where it shares the not-so-great reputation of indirect methods.

Its modern implementation depends crucially on automatic differentiation to avoid the manual derivations that made the original versions unattractive. This is where DiffEqFlux.jl and similar software shine. See the publication for a clear explanation of the technical details.
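For completeness, a CVI-style update on the weights could then look like the following loop, reusing the `loss` sketch above. Plain gradient descent with an illustrative step size is shown here, standing in for whatever optimizer one prefers.

```julia
# Illustrative CVI loop: repeatedly backpropagate the cost sensitivities
# through the solver and take a small gradient step on the policy weights.
p = copy(p0)
for i in 1:100
    g = Zygote.gradient(loss, p)[1]
    p .-= 1e-2 .* g
end
@show loss(p)  # cost after the (toy) optimization
```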

Acknowledgements

The idea was heavily inspired by the trebuchet demo of Flux and the differentiable control example of DiffEqFlux. A similar idea was contrasted with reinforcement learning in this work. Chris Rackauckas' advice was very useful.

Citation

If you find this work helpful, please consider citing the following paper:

```bibtex
@misc{https://doi.org/10.48550/arxiv.2210.11245,
  doi       = {10.48550/ARXIV.2210.11245},
  url       = {https://arxiv.org/abs/2210.11245},
  author    = {Sandoval, Ilya Orson and Petsagkourakis, Panagiotis and del Rio-Chanona, Ehecatl Antonio},
  keywords  = {Optimization and Control (math.OC), Artificial Intelligence (cs.AI), Systems and Control (eess.SY), FOS: Mathematics, FOS: Computer and information sciences, FOS: Electrical engineering, electronic engineering, information engineering},
  title     = {Neural ODEs as Feedback Policies for Nonlinear Optimal Control},
  publisher = {arXiv},
  year      = {2022},
  copyright = {arXiv.org perpetual, non-exclusive license}
}
```

Owner

  • Name: Ilya Orson
  • Login: IlyaOrson
  • Kind: user
  • Location: London, UK
  • Company: The Alan Turing Institute | Imperial College London

RL for cyberdefence @ TheAlanTuringInstitute · PhD candidate @ ImperialCollege · Interests: RL and optimal control 🤖 · Previously: data scientist & physicist 🚀

Citation (CITATION.bib)

@article{sandoval2022NODEpolicies,
  title    = {Neural ODEs as Feedback Policies for Nonlinear Optimal Control},
  author   = {Sandoval, Ilya Orson and Petsagkourakis, Panagiotis and del Rio-Chanona, Ehecatl Antonio},
  date     = {2022},
  pubstate = {submitted},
  note     = {submitted},
}

GitHub Events

Total
  • Watch event: 2
  • Fork event: 1
Last Year
  • Watch event: 2
  • Fork event: 1

Issues and Pull Requests

Last synced: about 1 year ago

All Time
  • Total issues: 0
  • Total pull requests: 0
  • Average time to close issues: N/A
  • Average time to close pull requests: N/A
  • Total issue authors: 0
  • Total pull request authors: 0
  • Average comments per issue: 0
  • Average comments per pull request: 0
  • Merged pull requests: 0
  • Bot issues: 0
  • Bot pull requests: 0
Past Year
  • Issues: 0
  • Pull requests: 0
  • Average time to close issues: N/A
  • Average time to close pull requests: N/A
  • Issue authors: 0
  • Pull request authors: 0
  • Average comments per issue: 0
  • Average comments per pull request: 0
  • Merged pull requests: 0
  • Bot issues: 0
  • Bot pull requests: 0

Dependencies

environment.yml conda
  • appdirs 1.4.4
  • attrs 20.3.0
  • backcall 0.2.0
  • black 19.10b0
  • blas 1.0
  • ca-certificates 2021.4.13
  • certifi 2020.12.5
  • click 7.1.2
  • colorama 0.4.4
  • cycler 0.10.0
  • decorator 5.0.6
  • freetype 2.10.4
  • icc_rt 2019.0.0
  • icu 58.2
  • intel-openmp 2021.2.0
  • ipopt 3.13.4
  • ipython 7.22.0
  • ipython_genutils 0.2.0
  • jedi 0.17.2
  • jpeg 9b
  • kiwisolver 1.3.1
  • libblas 3.9.0
  • libcblas 3.9.0
  • libflang 5.0.0
  • liblapack 3.9.0
  • libpng 1.6.37
  • libtiff 4.2.0
  • llvm-meta 5.0.0
  • lz4-c 1.9.3
  • m2w64-gcc-libgfortran 5.3.0
  • m2w64-gcc-libs 5.3.0
  • m2w64-gcc-libs-core 5.3.0
  • m2w64-gmp 6.1.0
  • m2w64-libwinpthread-git 5.0.0.4634.697f757
  • matplotlib 3.3.4
  • matplotlib-base 3.3.4
  • metis 5.1.0
  • mkl 2021.2.0
  • mkl-service 2.3.0
  • mkl_fft 1.3.0
  • mkl_random 1.2.1
  • msys2-conda-epoch 20160418
  • mumps-seq 5.2.1
  • mypy_extensions 0.4.1
  • numpy 1.20.1
  • numpy-base 1.20.1
  • olefile 0.46
  • openmp 5.0.0
  • openssl 1.1.1k
  • parso 0.7.0
  • pathspec 0.7.0
  • pickleshare 0.7.5
  • pillow 8.2.0
  • pip 21.0.1
  • prompt-toolkit 3.0.17
  • pygments 2.8.1
  • pyparsing 2.4.7
  • pyqt 5.9.2
  • python 3.9.4
  • python-dateutil 2.8.1
  • python_abi 3.9
  • qt 5.9.7
  • regex 2021.4.4
  • scipy 1.6.2
  • setuptools 52.0.0
  • sip 4.19.13
  • six 1.15.0
  • sqlite 3.35.4
  • tk 8.6.10
  • toml 0.10.2
  • tornado 6.1
  • traitlets 5.0.5
  • typed-ast 1.4.2
  • typing_extensions 3.7.4.3
  • tzdata 2020f
  • vc 14.2
  • vs2015_runtime 14.27.29016
  • wcwidth 0.2.5
  • wheel 0.36.2
  • wincertstore 0.2
  • xz 5.2.5
  • zlib 1.2.11
  • zstd 1.4.5