RegularizedOptimization.jl

Algorithms for regularized optimization

https://github.com/JuliaSmoothOptimizers/RegularizedOptimization.jl

Science Score: 67.0%

This score indicates how likely this project is to be science-related based on various indicators:

  • CITATION.cff file
    Found CITATION.cff file
  • codemeta.json file
    Found codemeta.json file
  • .zenodo.json file
    Found .zenodo.json file
  • DOI references
    Found 3 DOI reference(s) in README
  • Academic publication links
    Links to: arxiv.org, zenodo.org
  • Academic email domains
  • Institutional organization owner
  • JOSS paper metadata
  • Scientific vocabulary similarity
    Low similarity (13.3%) to scientific vocabulary
Last synced: 6 months ago

Repository

Algorithms for regularized optimization

Basic Info
  • Host: GitHub
  • Owner: JuliaSmoothOptimizers
  • License: other
  • Language: Julia
  • Default Branch: master
  • Homepage:
  • Size: 13.6 MB
Statistics
  • Stars: 13
  • Watchers: 3
  • Forks: 10
  • Open Issues: 26
  • Releases: 1
Created about 7 years ago · Last pushed 9 months ago
Metadata Files
Readme · License · Citation

README.md

RegularizedOptimization


How to cite

If you use RegularizedOptimization.jl in your work, please cite using the format given in CITATION.bib.

Synopsis

This package provides solvers for regularized optimization problems of the form

minₓ f(x) + h(x)

where f: ℝⁿ → ℝ has Lipschitz-continuous gradient and h: ℝⁿ → ℝ is lower semi-continuous and proper. The smooth term f describes the objective to minimize while the role of the regularizer h is to select a solution with desirable properties: minimum norm, sparsity below a certain level, maximum sparsity, etc. Both f and h can be nonconvex.

Installation

To install the package, press ] at the Julia REPL to enter the package manager, then run

pkg> add https://github.com/JuliaSmoothOptimizers/RegularizedOptimization.jl

What is Implemented?

Please refer to the documentation.

Related Software

References

  1. A. Y. Aravkin, R. Baraldi and D. Orban, A Proximal Quasi-Newton Trust-Region Method for Nonsmooth Regularized Optimization, SIAM Journal on Optimization, 32(2), pp.900–929, 2022. Technical report: https://arxiv.org/abs/2103.15993
  2. R. Baraldi, R. Kumar, and A. Aravkin (2019), Basis Pursuit De-noise with Non-smooth Constraints, IEEE Transactions on Signal Processing, vol. 67, no. 22, pp. 5811-5823.

BibTeX:

@article{aravkin-baraldi-orban-2022,
  author   = {Aravkin, Aleksandr Y. and Baraldi, Robert and Orban, Dominique},
  title    = {A Proximal Quasi-{N}ewton Trust-Region Method for Nonsmooth Regularized Optimization},
  journal  = {SIAM Journal on Optimization},
  volume   = {32},
  number   = {2},
  pages    = {900--929},
  year     = {2022},
  doi      = {10.1137/21M1409536},
  abstract = {We develop a trust-region method for minimizing the sum of a smooth term $f$ and a nonsmooth term $h$, both of which can be nonconvex. Each iteration of our method minimizes a possibly nonconvex model of $f + h$ in a trust region. The model coincides with $f + h$ in value and subdifferential at the center. We establish global convergence to a first-order stationary point when $f$ satisfies a smoothness condition that holds, in particular, when it has a Lipschitz-continuous gradient, and $h$ is proper and lower semicontinuous. The model of $h$ is required to be proper, lower semi-continuous and prox-bounded. Under these weak assumptions, we establish a worst-case $O(1/\epsilon^2)$ iteration complexity bound that matches the best known complexity bound of standard trust-region methods for smooth optimization. We detail a special instance, named TR-PG, in which we use a limited-memory quasi-Newton model of $f$ and compute a step with the proximal gradient method, resulting in a practical proximal quasi-Newton method. We establish similar convergence properties and complexity bound for a quadratic regularization variant, named R2, and provide an interpretation as a proximal gradient method with adaptive step size for nonconvex problems. R2 may also be used to compute steps inside the trust-region method, resulting in an implementation named TR-R2. We describe our Julia implementations and report numerical results on inverse problems from sparse optimization and signal processing. Both TR-PG and TR-R2 exhibit promising performance and compare favorably with two linesearch proximal quasi-Newton methods based on convex models.}
}
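The abstract above interprets R2 as a proximal gradient method with an adaptive step size: the quadratic regularization parameter σ acts like an inverse trust-region radius, grown after rejected steps and shrunk after successful ones. The sketch below illustrates that mechanism on the same LASSO instance; it is a simplified illustration, not the package's actual R2 implementation, and `r2_sketch` and its acceptance thresholds are invented for this example:

```python
import numpy as np

def soft_threshold(z, t):
    # Proximal operator of t * ||.||_1.
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def r2_sketch(A, b, lam, sigma=1.0, n_iter=200):
    # Simplified quadratic-regularization (R2-style) loop for min f(x) + h(x),
    # with f(x) = 0.5*||Ax - b||^2 and h(x) = lam*||x||_1. The parameter sigma
    # is an inverse step size, adapted trust-region fashion from the ratio of
    # actual to predicted decrease.
    f = lambda x: 0.5 * np.linalg.norm(A @ x - b) ** 2
    h = lambda x: lam * np.abs(x).sum()
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - b)
        # Candidate step: exact minimizer of the quadratic-regularized model,
        # i.e. the prox of h/sigma at the gradient step.
        x_new = soft_threshold(x - grad / sigma, lam / sigma)
        s = x_new - x
        # Decrease predicted by the model grad.s + (sigma/2)||s||^2 + h(x+s).
        pred = -(grad @ s + 0.5 * sigma * (s @ s) + h(x_new) - h(x))
        actual = (f(x) + h(x)) - (f(x_new) + h(x_new))
        if pred > 0 and actual >= 0.25 * pred:
            x = x_new                       # successful step: accept ...
            sigma = max(sigma / 2.0, 1e-8)  # ... and allow longer steps
        else:
            sigma *= 2.0                    # unsuccessful: regularize more, retry
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((40, 100))
x_true = np.zeros(100)
x_true[[3, 30, 70]] = [1.0, -2.0, 1.5]
b = A @ x_true
x_hat = r2_sketch(A, b, lam=0.1)
```

Once σ exceeds the Lipschitz constant of ∇f, every candidate step satisfies the acceptance test, so the loop needs no knowledge of that constant — the adaptivity discovers a workable step size on its own.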

Owner

  • Name: JuliaSmoothOptimizers
  • Login: JuliaSmoothOptimizers
  • Kind: organization
  • DOI: 10.5281/zenodo.2655082

Infrastructure and Solvers for Continuous Optimization in Julia

JOSS Publication

RegularizedOptimization.jl: A Julia framework for regularized and nonsmooth optimization
Published
February 15, 2026
Volume 11, Issue 118, Page 9344
Authors
Maxence Gollier ORCID
GERAD and Department of Mathematics and Industrial Engineering, Polytechnique Montréal, QC, Canada
Mohamed Laghdaf Habiboullah ORCID
GERAD and Department of Mathematics and Industrial Engineering, Polytechnique Montréal, QC, Canada
Geoffroy Leconte ORCID
Hexaly, France
Robert Baraldi ORCID
Sandia National Laboratories, USA
Alberto De Marchi ORCID
University of the Bundeswehr Munich, Germany
Dominique Orban ORCID
GERAD and Department of Mathematics and Industrial Engineering, Polytechnique Montréal, QC, Canada
Youssef Diouane ORCID
GERAD and Department of Mathematics and Industrial Engineering, Polytechnique Montréal, QC, Canada
Editor
Fabian Scheipl ORCID
Tags
nonsmooth optimization · nonconvex optimization · regularization methods · trust-region methods

Citation (CITATION.bib)

@Misc{baraldi-leconte-orban-regularized-optimization-2024,
  author = {R. Baraldi and G. Leconte and D. Orban},
  title = {{RegularizedOptimization.jl}: Algorithms for Regularized Optimization},
  month = {September},
  howpublished = {\url{https://github.com/JuliaSmoothOptimizers/RegularizedOptimization.jl}},
  year = {2024},
  DOI = {10.5281/zenodo.6940313},
}

GitHub Events

Total
  • Issues event: 23
  • Delete event: 12
  • Member event: 1
  • Issue comment event: 120
  • Push event: 63
  • Pull request event: 50
  • Pull request review comment event: 148
  • Pull request review event: 117
  • Create event: 15
Last Year
  • Issues event: 23
  • Delete event: 12
  • Member event: 1
  • Issue comment event: 120
  • Push event: 63
  • Pull request event: 50
  • Pull request review comment event: 148
  • Pull request review event: 117
  • Create event: 15

Issues and Pull Requests

Last synced: 6 months ago

All Time
  • Total issues: 25
  • Total pull requests: 98
  • Average time to close issues: 3 months
  • Average time to close pull requests: about 2 months
  • Total issue authors: 5
  • Total pull request authors: 9
  • Average comments per issue: 0.92
  • Average comments per pull request: 3.63
  • Merged pull requests: 70
  • Bot issues: 0
  • Bot pull requests: 17
Past Year
  • Issues: 16
  • Pull requests: 58
  • Average time to close issues: about 2 months
  • Average time to close pull requests: 28 days
  • Issue authors: 3
  • Pull request authors: 6
  • Average comments per issue: 0.63
  • Average comments per pull request: 3.76
  • Merged pull requests: 46
  • Bot issues: 0
  • Bot pull requests: 12
Top Authors
Issue Authors
  • dpo (9)
  • MaxenceGollier (7)
  • MohamedLaghdafHABIBOULLAH (5)
  • geoffroyleconte (2)
  • AHsu98 (1)
Pull Request Authors
  • dpo (33)
  • MaxenceGollier (29)
  • github-actions[bot] (21)
  • MohamedLaghdafHABIBOULLAH (15)
  • geoffroyleconte (12)
  • rjbaraldi (4)
  • nathanemac (4)
  • aldma (2)
  • AHsu98 (1)
Top Labels
Issue Labels
performance (2)
Pull Request Labels
formatting (21) · automated pr (21) · no changelog (21) · bug (2) · do not merge (2) · enhancement (1)