cooper-optim
A general-purpose, deep learning-first library for constrained optimization in PyTorch
Science Score: 54.0%
This score indicates how likely this project is to be science-related based on various indicators:
- ✓ CITATION.cff file: Found CITATION.cff file
- ✓ codemeta.json file: Found codemeta.json file
- ✓ .zenodo.json file: Found .zenodo.json file
- ○ DOI references
- ✓ Academic publication links: Links to arxiv.org, ieee.org
- ○ Academic email domains
- ○ Institutional organization owner
- ○ JOSS paper metadata
- ○ Scientific vocabulary similarity: Low similarity (12.4%) to scientific vocabulary
Repository
A general-purpose, deep learning-first library for constrained optimization in PyTorch
Basic Info
- Host: GitHub
- Owner: cooper-org
- License: mit
- Language: Python
- Default Branch: main
- Homepage: https://cooper.readthedocs.io/
- Size: 1.53 MB
Statistics
- Stars: 137
- Watchers: 1
- Forks: 12
- Open Issues: 7
- Releases: 2
Metadata Files
README.md
Cooper
What is Cooper?
Cooper is a library for solving constrained optimization problems in PyTorch.
Cooper implements several Lagrangian-based (first-order) update schemes that are applicable to a wide range of continuous constrained optimization problems. Cooper is mainly targeted at deep learning applications, where gradients are estimated from mini-batches, but it is also suitable for general continuous constrained optimization tasks.
There exist other libraries for constrained optimization in PyTorch, like CHOP and GeoTorch, but they rely on assumptions about the constraints (such as admitting efficient projection or proximal operators). These assumptions are often not met in modern machine learning problems. Cooper can be applied to a wider range of constrained optimization problems (including non-convex problems) thanks to its Lagrangian-based approach.
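For intuition, here is a minimal, plain-PyTorch sketch (not Cooper's API) of the kind of simultaneous primal-dual update a Lagrangian-based scheme performs. The toy objective and constraint below are assumptions chosen only for illustration.

```python
import torch

# Toy problem (illustrative assumption): minimize f(x) = (x - 2)^2 subject to g(x) = x - 1 <= 0.
# A Lagrangian-based first-order scheme runs gradient descent on x and gradient ascent on the
# multiplier of L(x, lambda) = f(x) + lambda * g(x), keeping lambda >= 0.
x = torch.zeros(1, requires_grad=True)
multiplier = torch.zeros(1, requires_grad=True)

primal_optimizer = torch.optim.SGD([x], lr=0.05)
dual_optimizer = torch.optim.SGD([multiplier], lr=0.05, maximize=True)

for _ in range(2000):
    primal_optimizer.zero_grad()
    dual_optimizer.zero_grad()

    loss = (x - 2) ** 2
    violation = x - 1
    lagrangian = loss + multiplier * violation

    lagrangian.backward()
    primal_optimizer.step()  # descend on x
    dual_optimizer.step()    # ascend on the multiplier

    with torch.no_grad():
        multiplier.clamp_(min=0.0)  # project: inequality multipliers stay nonnegative

print(x.item(), multiplier.item())  # approaches the constrained optimum x = 1 (multiplier near 2)
```

Projection-based libraries instead require projecting onto the feasible set at every step; the Lagrangian approach only needs constraint values and gradients, which is why it extends to non-convex, mini-batch settings.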
You can check out Cooper's FAQ here.
Cooper's companion paper is available on arXiv (arXiv:2504.01212).
Installation
To install the latest release of Cooper, use the following command:
```bash
pip install cooper-optim
```
To install the latest development version, use the following command instead:
```bash
pip install git+https://github.com/cooper-org/cooper@main
```
Getting Started
Quick Start
To use Cooper, you need to:
- Implement a `ConstrainedMinimizationProblem` (CMP) class and its associated `ConstrainedMinimizationProblem.compute_cmp_state` method. This method computes the value of the objective function and constraint violations, and packages them in a `CMPState` object.
- The initialization of the `CMP` must create a `Constraint` object for each constraint. It is necessary to specify a formulation type (e.g. `Lagrangian`). Finally, if the chosen formulation requires it, each constraint needs an associated `Multiplier` object corresponding to the Lagrange multiplier for that constraint.
- Create a `torch.optim.Optimizer` for the primal variables and a `torch.optim.Optimizer(maximize=True)` for the dual variables (i.e. the multipliers). Then, wrap these two optimizers in a `cooper.optim.CooperOptimizer` (such as `SimultaneousOptimizer` for executing simultaneous primal-dual updates).
- You are now ready to perform updates on the primal and dual parameters using the `CooperOptimizer.roll()` method. This method triggers the following calls:
  - `zero_grad()` on both optimizers,
  - `compute_cmp_state()` on the `CMP`,
  - compute the Lagrangian based on the latest `CMPState`,
  - `backward()` on the Lagrangian,
  - `step()` on both optimizers.
- To access the value of the loss, constraint violations, and Lagrangian terms, you can inspect the returned `RollOut` object from the call to `roll()`.
Example
This is an abstract example of how to solve a constrained optimization problem with Cooper. You can find runnable notebooks with concrete examples in our Tutorials.
```python
import cooper
import torch

# Set up GPU acceleration
DEVICE = ...

class MyCMP(cooper.ConstrainedMinimizationProblem):
    def __init__(self):
        super().__init__()
        multiplier = cooper.multipliers.DenseMultiplier(num_constraints=..., device=DEVICE)
        # By default, constraints are built using `formulation_type=cooper.formulations.Lagrangian`
        self.constraint = cooper.Constraint(
            multiplier=multiplier, constraint_type=cooper.ConstraintType.INEQUALITY
        )

    def compute_cmp_state(self, model, inputs, targets):
        inputs, targets = inputs.to(DEVICE), targets.to(DEVICE)
        loss = ...
        constraint_state = cooper.ConstraintState(violation=...)
        observed_constraints = {self.constraint: constraint_state}

        return cooper.CMPState(loss=loss, observed_constraints=observed_constraints)


train_loader = ...
model = (...).to(DEVICE)
cmp = MyCMP()

primal_optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

# Must set maximize=True since the Lagrange multipliers solve a maximization problem
dual_optimizer = torch.optim.SGD(cmp.dual_parameters(), lr=1e-2, maximize=True)

cooper_optimizer = cooper.optim.SimultaneousOptimizer(
    cmp=cmp, primal_optimizers=primal_optimizer, dual_optimizers=dual_optimizer
)

for epoch_num in range(NUM_EPOCHS):
    for inputs, targets in train_loader:
        # roll is a convenience function that packages together the evaluation
        # of the loss, call for gradient computation, the primal and dual updates and zero_grad
        compute_cmp_state_kwargs = {"model": model, "inputs": inputs, "targets": targets}
        roll_out = cooper_optimizer.roll(compute_cmp_state_kwargs=compute_cmp_state_kwargs)
        # `roll_out` is a namedtuple containing the loss, last CMPState, and the primal
        # and dual Lagrangian stores, useful for inspection and logging
```
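Continuing the loop above, logging what `roll()` returned might look like the sketch below. The attribute names `loss` and `cmp_state` are assumptions based on the namedtuple description in the comments above; consult the API reference for the exact `RollOut` fields.

```python
# Hypothetical logging snippet, placed inside the training loop above.
# The `loss` and `cmp_state` attribute names are assumed from the namedtuple description.
loss_value = roll_out.loss.item()  # objective value at the last primal point

# Constraint violations recorded in the last CMPState (a dict, as built in compute_cmp_state)
for constraint, constraint_state in roll_out.cmp_state.observed_constraints.items():
    print(f"loss={loss_value:.4f}  violation={constraint_state.violation}")
```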
Contributions
We appreciate all contributions. Please let us know if you encounter a bug by filing an issue.
If you plan to contribute new features, utility functions, or extensions, please first open an issue and discuss the feature with us. To learn more about making a contribution to Cooper, please see our Contribution page.
Papers Using Cooper
Cooper has enabled several papers published at top machine learning conferences: Gallego-Posada et al. (2022); Lachapelle and Lacoste-Julien (2022); Ramirez and Gallego-Posada (2022); Zhu et al. (2023); Hashemizadeh et al. (2024); Sohrabi et al. (2024); Lachapelle et al. (2024); Jang et al. (2024); Navarin et al. (2024); Chung et al. (2024).
Acknowledgements
We thank Manuel Del Verme, Daniel Otero, and Isabel Urrego for useful discussions during the early stages of Cooper.
Many Cooper features arose during the development of several research papers. We would like to thank our co-authors Yoshua Bengio, Juan Elenter, Akram Erraqabi, Golnoosh Farnadi, Ignacio Hounie, Alejandro Ribeiro, Rohan Sukumaran, Motahareh Sohrabi and Tianyue (Helen) Zhang.
License
Cooper is distributed under an MIT license, as found in the LICENSE file.
How to cite Cooper
To cite Cooper, please cite this paper:
```bibtex
@article{gallegoPosada2025cooper,
    author={Gallego-Posada, Jose and Ramirez, Juan and Hashemizadeh, Meraj and Lacoste-Julien, Simon},
    title={{Cooper: A Library for Constrained Optimization in Deep Learning}},
    journal={arXiv preprint arXiv:2504.01212},
    year={2025}
}
```
Owner
- Name: Cooper
- Login: cooper-org
- Kind: organization
- Repositories: 1
- Profile: https://github.com/cooper-org
Citation (CITATION.cff)
```yaml
cff-version: 1.2.0
title: 'Cooper: A Library for Constrained Optimization in Deep Learning'
message: If you use this software, please consider citing it as indicated below.
type: software
authors:
  - family-names: Gallego-Posada
    given-names: Jose
  - family-names: Ramirez
    given-names: Juan
  - family-names: Hashemizadeh
    given-names: Meraj
  - family-names: Lacoste-Julien
    given-names: Simon
identifiers:
  - type: other
    value: 'arXiv:2504.01212'
    description: The ArXiv preprint of the paper
repository-code: 'https://github.com/cooper-org/cooper'
url: 'https://github.com/cooper-org/cooper'
abstract: "Cooper is an open-source package for solving constrained optimization problems
  involving deep learning models. Cooper implements several Lagrangian-based first-order
  update schemes, making it easy to combine constrained optimization algorithms with
  high-level features of PyTorch such as automatic differentiation, and specialized deep
  learning architectures and optimizers. Although Cooper is specifically designed for deep
  learning applications where gradients are estimated based on mini-batches, it is suitable
  for general non-convex continuous constrained optimization. Cooper's source code is
  available at https://github.com/cooper-org/cooper."
keywords:
  - non-convex constrained optimization
  - lagrangian optimization
  - pytorch
  - machine learning
license: MIT
version: 1.0.0
date-released: '2025-04-01'
preferred-citation:
  type: article
  title: 'Cooper: A Library for Constrained Optimization in Deep Learning'
  authors:
    - family-names: Gallego-Posada
      given-names: Jose
    - family-names: Ramirez
      given-names: Juan
    - family-names: Hashemizadeh
      given-names: Meraj
    - family-names: Lacoste-Julien
      given-names: Simon
  journal: "arXiv preprint arXiv:2504.01212"
  year: 2025
```
GitHub Events
Total
- Create event: 5
- Release event: 1
- Issues event: 11
- Watch event: 21
- Delete event: 8
- Issue comment event: 12
- Push event: 127
- Pull request event: 8
Last Year
- Create event: 5
- Release event: 1
- Issues event: 11
- Watch event: 21
- Delete event: 8
- Issue comment event: 12
- Push event: 127
- Pull request event: 8
Packages
- Total packages: 1
- Total downloads: 73 last month (PyPI)
- Total dependent packages: 0
- Total dependent repositories: 0
- Total versions: 2
- Total maintainers: 3
pypi.org: cooper-optim
A library for Lagrangian-based constrained optimization in PyTorch
- Homepage: https://github.com/cooper-org/cooper
- Documentation: https://cooper.readthedocs.io
- License: mit
- Latest release: 1.0.1 (published 11 months ago)
Dependencies
- actions/checkout v2 composite
- actions/setup-python v2 composite
- actions/checkout v2 composite
- actions/setup-python v2 composite