haleqo.jl
HALeqO solver for nonlinear equality-constrained optimization
Science Score: 57.0%
This score indicates how likely this project is to be science-related, based on the following indicators:
- ✓ CITATION.cff file: found
- ✓ codemeta.json file: found
- ✓ .zenodo.json file: found
- ✓ DOI references: found 3 DOI reference(s) in README
- ○ Academic publication links
- ○ Committers with academic emails
- ○ Institutional organization owner
- ○ JOSS paper metadata
- ○ Scientific vocabulary similarity: low similarity (13.0%) to scientific vocabulary
Repository
HALeqO solver for nonlinear equality-constrained optimization
Basic Info
Statistics
- Stars: 5
- Watchers: 1
- Forks: 0
- Open Issues: 2
- Releases: 4
Metadata Files
README.md
HALeqO.jl
Homotopy Augmented Lagrangian method for EQuality-constrained Optimization
HALeqO.jl is a pure Julia implementation of a solver for continuous nonlinear equality-constrained optimization problems of the form
min f(x) over x in R^n subject to c(x) = 0
based on a homotopy augmented Lagrangian method with globalised Newton steps and an Armijo linesearch. To invoke the haleqo solver, pass it an NLPModel; it returns a GenericExecutionStats.
using NLPModels, HALeqO
out = haleqo(nlp)
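For instance, a small equality-constrained problem can be built with ADNLPModels.jl and handed to haleqo. This is a hedged sketch for illustration: the problem itself is made up here, and ADNLPModels is an assumed extra package, not a stated dependency of HALeqO.

```julia
using ADNLPModels, HALeqO

# Illustrative problem:
#   min (x1 - 1)^2 + (x2 - 2)^2   subject to   x1 + x2 - 2 = 0
nlp = ADNLPModel(
    x -> (x[1] - 1)^2 + (x[2] - 2)^2,  # objective f(x)
    [0.0, 0.0],                        # initial guess x0
    x -> [x[1] + x[2] - 2],            # constraint c(x)
    [0.0],                             # lower constraint bound
    [0.0],                             # upper bound equal => equality constraint
)

out = haleqo(nlp)
println(out.status)
println(out.solution)  # analytic solution of this problem: x = [0.5, 1.5]
```

Setting equal lower and upper constraint bounds is how NLPModels encodes c(x) = 0, which matches the problem class HALeqO targets.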
You can solve a JuMP model m by using NLPModelsJuMP to convert it.
using NLPModelsJuMP, HALeqO
nlp = MathOptNLPModel(m)
out = haleqo(nlp)
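As a hedged sketch of the full round trip (the model m below is illustrative, not taken from the repository), one might write:

```julia
using JuMP, NLPModelsJuMP, HALeqO

# Illustrative JuMP model:
#   min (x1 - 1)^2 + (x2 - 2)^2   subject to   x1 + x2 == 2
m = Model()
@variable(m, x[1:2], start = 0.0)
@objective(m, Min, (x[1] - 1)^2 + (x[2] - 2)^2)
@constraint(m, x[1] + x[2] == 2)

nlp = MathOptNLPModel(m)  # wrap the JuMP model as an NLPModel
out = haleqo(nlp)
```

Note that only equality constraints fit the problem class HALeqO addresses; inequality-constrained JuMP models would fall outside it.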
Linear solver
HALeqO.jl uses the free QDLDL.jl routines as its main linear solver and PositiveFactorizations.jl for regularizing the Hessian matrix. These could be replaced by, or complemented with, LDLFactorizations.jl or HSL.jl's MA57.
Citing
If you find this code useful, you can cite the related paper as
@inproceedings{demarchi2021augmented,
author = {De~Marchi, Alberto},
title = {Augmented {L}agrangian methods as dynamical systems for constrained optimization},
year = {2021},
month = {12},
pages = {6533--6538},
booktitle = {2021 60th {IEEE} {C}onference on {D}ecision and {C}ontrol ({CDC})},
doi = {10.1109/CDC45484.2021.9683199},
}
Benchmarks
We compared HALeqO against Ipopt, via the wrapper provided by NLPModelsIpopt, and against NCL.jl invoking Ipopt; Percival is now also available as an alternative. See run_benchmarks.jl in the tests folder.
To use the provided test codes (originally compatible with Julia version 1.8.5, tested on Linux x86_64, August 2021):
- start Julia from this directory with `julia --project=.` (or with the relative path to the HALeqO directory from somewhere else);
- enter the package manager with `]`, run `instantiate` to download all dependencies (only required the first time), and return to the standard Julia prompt with backspace;
- load the package with `using HALeqO`.
To run the CUTEst benchmarks:
- run `include("tests/run_benchmarks.jl")` to invoke the solvers, generate the results, print statistics, and save figures.
Options for benchmarking (e.g., max time and tolerance) can be modified at the beginning of the tests/run_benchmarks.jl file.
Owner
- Name: Alberto De Marchi
- Login: aldma
- Kind: user
- Location: Europe
- Website: aldma.github.io
- Repositories: 10
- Profile: https://github.com/aldma
Committers
Last synced: about 1 year ago
Top Committers
| Name | Email | Commits |
|---|---|---|
| Alberto De Marchi | a****2@l****t | 16 |
| Alberto De Marchi | a****i@g****m | 13 |