Argos

Reduced-space optimization, for optimal power flow.

https://github.com/exanauts/argos.jl

Science Score: 54.0%

This score indicates how likely this project is to be science-related based on various indicators:

  • CITATION.cff file
    Found CITATION.cff file
  • codemeta.json file
    Found codemeta.json file
  • .zenodo.json file
    Found .zenodo.json file
  • DOI references
  • Academic publication links
    Links to: zenodo.org
  • Academic email domains
  • Institutional organization owner
  • JOSS paper metadata
  • Scientific vocabulary similarity
    Low similarity (12.9%) to scientific vocabulary

Keywords

gpu julia optimization
Last synced: 6 months ago

Repository

Reduced-space optimization, for optimal power flow.

Basic Info
  • Host: GitHub
  • Owner: exanauts
  • License: MIT
  • Language: Julia
  • Default Branch: master
  • Homepage:
  • Size: 2.64 MB
Statistics
  • Stars: 21
  • Watchers: 7
  • Forks: 3
  • Open Issues: 2
  • Releases: 6
Topics
gpu julia optimization
Created over 5 years ago · Last pushed 11 months ago
Metadata Files
Readme License Citation

README.md

Argos.jl


Argos.jl extends the power-system modeler ExaPF.jl and the interior-point solver MadNLP.jl to solve optimal power flow (OPF) problems entirely in Julia.

The package is structured as follows:
- `src/Evaluators/`: the optimization evaluators implementing the callbacks (objective, gradient, Hessian) required by the optimization algorithms.
- `src/Algorithms/`: an Augmented Lagrangian algorithm, targeting primarily the resolution of large-scale OPF problems on GPU architectures.
- `src/Wrappers/`: wrappers for MathOptInterface and NLPModels.jl.

Installation

One can install Argos with the default package manager:

```julia
# In the Julia REPL, press `]` to enter the package manager
pkg> add Argos
```

To check that everything is working as expected, please run

```julia
pkg> test Argos
```

By default, this command tests all the Evaluators implemented in Argos on the CPU and, if available, on a CUDA GPU.

Quickstart

The function `run_opf` is the entry point to Argos. It takes as input a path to a MATPOWER file and solves the associated OPF with MadNLP:

```julia
# Solve in the full-space
ips = Argos.run_opf("data/case9.m", Argos.FullSpace())
```

The second argument specifies the formulation used inside MadNLP to solve the OPF problem. `FullSpace()` implements the classical full-space formulation, as implemented in [MATPOWER](https://matpower.org/) or [PowerModels.jl](https://github.com/lanl-ansi/PowerModels.jl). Alternatively, one may solve the OPF using the reduced-space formulation of Dommel and Tinney:

```julia
# Solve in the reduced-space
ips = Argos.run_opf("data/case9.m", Argos.DommelTinney())
```

How to use Argos' evaluators?

Argos implements two evaluators to solve the OPF problem: `FullSpaceEvaluator` implements the classical OPF formulation in the full-space, whereas `ReducedSpaceEvaluator` implements the reduced-space formulation of Dommel & Tinney.

Using an evaluator

Instantiating a new evaluator from a MATPOWER file simply amounts to

```julia
# Reduced-space evaluator
nlp = Argos.ReducedSpaceEvaluator("case57.m")
# Full-space evaluator
flp = Argos.FullSpaceEvaluator("case57.m")
```

An initial optimization variable can be computed as

```julia
u = Argos.initial(nlp)
```

The variable `u` is the control that will be used throughout the optimization. Once a new point `u` is obtained, one can refresh all the structures inside `nlp` with:

```julia
Argos.update!(nlp, u)
```

Once the structures are refreshed, the other callbacks can be evaluated as well:

```julia
Argos.objective(nlp, u)  # objective
Argos.gradient(nlp, u)   # reduced gradient
Argos.jacobian(nlp, u)   # reduced Jacobian
Argos.hessian(nlp, u)    # reduced Hessian
```
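The callbacks above compose into a standard first-order loop: refresh the model at the current control, then query objective and gradient. The sketch below mimics the evaluator interface with a hypothetical quadratic stand-in (`QuadEval`, not part of Argos) so the loop shape is visible without a power-system model:

```julia
# Hypothetical stand-in for an Argos evaluator (not part of Argos): it exposes
# the same callbacks for the quadratic problem min 0.5*u'Q*u - b'u.
struct QuadEval
    Q::Matrix{Float64}
    b::Vector{Float64}
end

initial(ev::QuadEval) = zeros(length(ev.b))
update!(ev::QuadEval, u) = nothing        # Argos refreshes internal structures here
objective(ev::QuadEval, u) = 0.5 * u' * ev.Q * u - ev.b' * u
gradient(ev::QuadEval, u) = ev.Q * u - ev.b

# Plain gradient descent, driven only through the evaluator callbacks
function minimize(ev; step=0.1, iters=200)
    u = initial(ev)
    for _ in 1:iters
        update!(ev, u)
        u = u - step * gradient(ev, u)
    end
    return u
end

ev = QuadEval([4.0 1.0; 1.0 3.0], [1.0, 2.0])
u = minimize(ev)   # converges to Q \ b = [1/11, 7/11]
```

In Argos the same pattern applies, except `update!` triggers the (possibly GPU-resident) power-flow refresh before the callbacks are queried.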

MOI wrapper

Argos implements a wrapper to MathOptInterface to solve the optimal power flow problem with any nonlinear optimization solver compatible with MathOptInterface:

```julia
using Ipopt, MathOptInterface
const MOI = MathOptInterface

nlp = Argos.ReducedSpaceEvaluator("case57.m")
optimizer = Ipopt.Optimizer()  # MOI optimizer
# Update tolerance to be above the tolerance of the Newton-Raphson subsolver
MOI.set(optimizer, MOI.RawOptimizerAttribute("tol"), 1e-5)
# Solve the reduced-space problem
solution = Argos.optimize!(optimizer, nlp)
```

NLPModels wrapper

Alternatively, one can use NLPModels.jl to wrap any evaluator implemented in Argos. This amounts simply to:

```julia
using NLPModels

nlp = Argos.FullSpaceEvaluator("case57.m")
# Wrap in NLPModels
model = Argos.OPFModel(nlp)

x0 = NLPModels.get_x0(model)
obj = NLPModels.obj(model, x0)
```

Once the evaluator is wrapped inside NLPModels.jl, we can leverage any solver implemented in JuliaSmoothOptimizers to solve the OPF problem.
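As one possibility, the wrapped model can be handed to a JuliaSmoothOptimizers solver. The sketch below assumes NLPModelsIpopt.jl is installed (an assumption; any solver accepting an `AbstractNLPModel` should work the same way):

```julia
using NLPModels
using NLPModelsIpopt  # assumed installed; provides the `ipopt` entry point

nlp = Argos.FullSpaceEvaluator("case57.m")
model = Argos.OPFModel(nlp)

# `ipopt` takes any AbstractNLPModel and returns execution statistics
stats = ipopt(model)
println(stats.status)
```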

How to deport the solution of the OPF on the GPU?

ExaPF.jl uses KernelAbstractions to implement all its core operations, so deporting the computation to GPU accelerators is straightforward. Argos.jl inherits this behavior, and all evaluators can be instantiated on GPU accelerators, simply as

```julia
using CUDAKernels  # load the CUDA backend for KernelAbstractions
using ArgosCUDA

nlp = Argos.ReducedSpaceEvaluator("case57.m"; device=CUDADevice())
```

When doing so, all kernels are instantiated on the GPU to avoid memory transfers between the host and the device. The sparse linear algebra operations are handled by cuSPARSE, and the sparse factorizations are performed using cusolverRF via the Julia wrapper CUSOLVERRF.jl. This package is loaded via the ArgosCUDA.jl package included in /lib. When deporting the computation to the GPU, the reduced Hessian can be evaluated in parallel.

Batch evaluation of the reduced Hessian

Instead of computing the reduced Hessian one Hessian-vector product at a time, the Hessian-vector products can be evaluated in batch. To activate batch evaluation of the reduced Hessian, specify the number of Hessian-vector products to perform in one batch:

```julia
nlp = Argos.ReducedSpaceEvaluator("case57.m"; device=CUDADevice(), nbatch_hessian=8)
```

Note that on large instances, the batch computation can be demanding in terms of GPU memory.
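Conceptually, the reduced Hessian is recovered column by column from Hessian-vector products with the basis vectors, and batching amounts to multiplying by a block of basis vectors at once (one matrix-matrix product instead of several matrix-vector products). A minimal, Argos-independent sketch in plain Julia, with a dense matrix as a hypothetical stand-in for the Hessian operator:

```julia
using LinearAlgebra

# Dense stand-in for the (symmetric) reduced Hessian operator
H = [4.0 1.0 0.0;
     1.0 3.0 1.0;
     0.0 1.0 2.0]
hessprod(V) = H * V           # one batched call = size(V, 2) Hessian-vector products

n, nbatch = size(H, 1), 2
Hred = zeros(n, n)
E = Matrix{Float64}(I, n, n)  # basis vectors e_1, ..., e_n
for j0 in 1:nbatch:n
    cols = j0:min(j0 + nbatch - 1, n)
    Hred[:, cols] = hessprod(E[:, cols])  # recover nbatch columns at once
end
@assert Hred == H
```

On a GPU the batched call maps to a single larger kernel launch, which is what makes `nbatch_hessian` pay off despite the extra memory it requires.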

Owner

  • Name: Exanauts
  • Login: exanauts
  • Kind: organization

An eclectic collection of ECP ExaSGD project codes

Citation (CITATION.cff)

cff-version: 1.2.0
message: "Cite this paper whenever you use Argos.jl"
authors:
  - family-names: Pacaud
    given-names: François
  - family-names: Shin
    given-names: Sungho
  - family-names: Schanen
    given-names: Michel
  - family-names: Maldonado
    given-names: Daniel Adrian
  - family-names: Anitescu
    given-names: Mihai
title: "Condensed interior-point methods: porting reduced-space approaches on GPU hardware"
doi: 10.48550/ARXIV.2203.11875
year: 2022
publisher:
  name: "arXiv"

GitHub Events

Total
  • Watch event: 3
  • Push event: 24
Last Year
  • Watch event: 3
  • Push event: 24

Issues and Pull Requests

Last synced: 7 months ago

All Time
  • Total issues: 8
  • Total pull requests: 65
  • Average time to close issues: 4 months
  • Average time to close pull requests: 6 days
  • Total issue authors: 2
  • Total pull request authors: 4
  • Average comments per issue: 1.0
  • Average comments per pull request: 0.34
  • Merged pull requests: 58
  • Bot issues: 0
  • Bot pull requests: 0
Past Year
  • Issues: 0
  • Pull requests: 0
  • Average time to close issues: N/A
  • Average time to close pull requests: N/A
  • Issue authors: 0
  • Pull request authors: 0
  • Average comments per issue: 0
  • Average comments per pull request: 0
  • Merged pull requests: 0
  • Bot issues: 0
  • Bot pull requests: 0
Top Authors
Issue Authors
  • frapac (7)
  • JuliaTagBot (1)
Pull Request Authors
  • frapac (58)
  • michel2323 (5)
  • amontoison (2)
  • degleris1 (1)
Top Labels
Issue Labels
Pull Request Labels

Packages

  • Total packages: 1
  • Total downloads:
    • julia 1 total
  • Total dependent packages: 0
  • Total dependent repositories: 0
  • Total versions: 6
juliahub.com: Argos

Reduced-space optimization, for optimal power flow.

  • Versions: 6
  • Dependent Packages: 0
  • Dependent Repositories: 0
  • Downloads: 1 Total
Rankings
Dependent repos count: 9.9%
Stargazers count: 26.7%
Average: 27.2%
Forks count: 33.3%
Dependent packages count: 38.9%
Last synced: 7 months ago

Dependencies

.github/workflows/DocsCleanup.yml actions
  • actions/checkout v2 composite
.github/workflows/TagBot.yml actions
  • JuliaRegistries/TagBot v1 composite
.github/workflows/action.yml actions