theseus-ai

A library for differentiable nonlinear optimization

https://github.com/facebookresearch/theseus

Science Score: 64.0%

This score indicates how likely this project is to be science-related based on various indicators:

  • CITATION.cff file
    Found CITATION.cff file
  • codemeta.json file
    Found codemeta.json file
  • .zenodo.json file
    Found .zenodo.json file
  • DOI references
  • Academic publication links
    Links to: arxiv.org
  • Committers with academic emails
    1 of 27 committers (3.7%) from academic institutions
  • Institutional organization owner
  • JOSS paper metadata
  • Scientific vocabulary similarity
    Low similarity (11.3%) to scientific vocabulary

Keywords

bilevel-optimization computer-vision deep-learning differentiable-optimization embodied-ai gauss-newton implicit-differentiation levenberg-marquardt nonlinear-least-squares pytorch robotics
Last synced: 4 months ago

Repository

A library for differentiable nonlinear optimization

Basic Info
  • Host: GitHub
  • Owner: facebookresearch
  • License: mit
  • Language: Python
  • Default Branch: main
  • Size: 11.3 MB
Statistics
  • Stars: 1,935
  • Watchers: 34
  • Forks: 132
  • Open Issues: 80
  • Releases: 10
Topics
bilevel-optimization computer-vision deep-learning differentiable-optimization embodied-ai gauss-newton implicit-differentiation levenberg-marquardt nonlinear-least-squares pytorch robotics
Created about 4 years ago · Last pushed 12 months ago
Metadata Files
Readme · Contributing · License · Code of conduct · Citation

README.md


A library for differentiable nonlinear optimization

Paper · Video · Twitter · Webpage · Tutorials

Theseus is an efficient, application-agnostic library for building custom nonlinear optimization layers in PyTorch, supporting the construction of various problems in robotics and vision as end-to-end differentiable architectures.

Differentiable nonlinear optimization provides a general scheme for encoding inductive priors: the objective function can be parameterized partly by neural models and partly by expert, domain-specific differentiable models. The ability to compute gradients end-to-end is retained by differentiating through the optimizer, which lets the neural models train on the final task loss while also taking advantage of the priors captured by the optimizer.
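
In symbols, this is a bilevel problem (a sketch; the notation here is ours, not taken from the Theseus paper):

```latex
% Inner problem: nonlinear least squares over optimization variables \theta,
% with residuals r_i partly parameterized by learnable parameters \phi.
\theta^*(\phi) = \arg\min_{\theta} \tfrac{1}{2} \sum_i \lVert r_i(\theta; \phi) \rVert^2
% Outer problem: the task loss L is backpropagated through the solution map
% \phi \mapsto \theta^*(\phi), training the neural components of r_i end-to-end.
\min_{\phi} \; L\big(\theta^*(\phi)\big)
```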

See the list of papers published using Theseus for examples across various application domains.


Current Features

Application-agnostic interface

Our implementation provides an easy-to-use interface to build custom optimization layers and plug them into any neural architecture. The following differentiable features are currently available:

  • Second-order nonlinear optimizers: Gauss-Newton (GN), Levenberg–Marquardt (LM), Trust Region, Dogleg
  • Other nonlinear optimizers: Cross-Entropy Method (CEM)
  • Linear solvers: dense Cholesky and LU; sparse CHOLMOD, LU (GPU-only), and BaSpaCho (a selection sketch follows this list)
  • Commonly used costs, AutoDiffCostFunction, RobustCostFunction
  • Lie groups based on torchlie
  • Robot kinematics based on torchkin
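
As a minimal sketch of mixing and matching these components (the trivial objective is ours, for illustration only; class names such as LevenbergMarquardt, CholmodSparseSolver, and ScaleCostWeight should be checked against the installed version):

```python
import torch
import theseus as th

# A trivial one-dimensional objective, just to illustrate solver selection:
# pull the optimization variable v toward the auxiliary target value.
v = th.Vector(1, name="v")
target = th.Variable(torch.ones(1, 1), name="target")

def err_fn(optim_vars, aux_vars):
    return optim_vars[0].tensor - aux_vars[0].tensor

objective = th.Objective()
objective.add(th.AutoDiffCostFunction(
    [v], err_fn, 1, aux_vars=[target],
    cost_weight=th.ScaleCostWeight(1.0)))

# Swap Gauss-Newton for Levenberg-Marquardt and pick a sparse linear solver
# (CHOLMOD needs the suitesparse prerequisites listed under Getting Started).
optimizer = th.LevenbergMarquardt(
    objective,
    linear_solver_cls=th.CholmodSparseSolver,
    max_iterations=20,
)
layer = th.TheseusLayer(optimizer)
```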

Efficiency-based design

We support several features that improve computation times and memory consumption:

  • Sparse linear solvers
  • Batching and GPU acceleration
  • Automatic vectorization
  • Backward modes: Implicit, Truncated, Direct Loss Minimization (DLM), Sampling (LEO) (a sketch follows this list)
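
Continuing the sketch above, the backward mode is selected per call when running the layer. The input_tensors and optimizer_kwargs keywords follow the example further below; backward_num_iterations for truncated backprop is an assumption to verify against the docs for your version:

```python
# Continuing the sketch above: input tensors are matched to variables by name.
solution, info = layer.forward(
    input_tensors={"v": torch.zeros(1, 1), "target": torch.ones(1, 1)},
    optimizer_kwargs={
        "backward_mode": "truncated",     # or "implicit", "unroll", "dlm"
        "backward_num_iterations": 5,     # assumption: used by truncated mode
    },
)
```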

Getting Started

Prerequisites

  • We strongly recommend installing Theseus in a venv or conda environment with Python 3.8-3.10.
  • Theseus requires a torch installation. To install for your particular CPU/CUDA configuration, follow the instructions on the PyTorch website.
  • For GPU support, Theseus requires nvcc to compile custom CUDA operations. Make sure it matches the version used to compile PyTorch, as reported by nvcc --version (see the snippet after this list). If not, install it and ensure its location is on your system's $PATH variable.
  • Theseus also requires suitesparse, which you can install via:
    • sudo apt-get install libsuitesparse-dev (Ubuntu).
    • conda install -c conda-forge suitesparse (Mac).
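
A quick way to compare the two versions is a minimal check like the following; it prints the CUDA version PyTorch was built with, to be matched against the release that nvcc --version reports:

```python
import torch

# CUDA version PyTorch was compiled with (None for CPU-only builds);
# this should match the release reported by `nvcc --version`.
print(torch.version.cuda)
```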

Installing

  • pypi

    ```bash
    pip install theseus-ai
    ```

    We currently provide wheels with our CUDA extensions compiled using CUDA 11.6 and Python 3.10. For other CUDA versions, consider installing from source or using our build script.

    Note that the pypi installation doesn't include our experimental Theseus Labs. For this, please install from source.

  • From source

    The simplest way to install Theseus from source is by running the following (see further below to also include BaSpaCho):

    ```bash
    git clone https://github.com/facebookresearch/theseus.git && cd theseus
    pip install -e .
    ```

    If you are interested in contributing to Theseus, instead install

    ```bash
    pip install -e ".[dev]"
    pre-commit install
    ```

    and follow the more detailed instructions in CONTRIBUTING.

  • Installing BaSpaCho extensions from source

    By default, installing from source doesn't include our BaSpaCho sparse solver extension. For this, follow these steps:

1. Compile BaSpaCho from source following instructions [here](https://github.com/facebookresearch/baspacho). We recommend using flags `-DBLA_STATIC=ON -DBUILD_SHARED_LIBS=OFF`.
2. Run

    ```bash
    git clone https://github.com/facebookresearch/theseus.git && cd theseus
    BASPACHO_ROOT_DIR=<path/to/root/baspacho/dir> pip install -e .
    ```

    where the BaSpaCho root dir must have the binaries in the subdirectory `build`.
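
    Once compiled, the extension can presumably be selected like any other sparse solver. A hedged sketch, reusing the objective from the Current Features sketch above and assuming BaspachoSparseSolver is the exposed class name:

    ```python
    import theseus as th

    # Hedged sketch: route the linear solves inside Levenberg-Marquardt through
    # BaSpaCho, reusing `objective` from the Current Features sketch above.
    optimizer = th.LevenbergMarquardt(
        objective,
        linear_solver_cls=th.BaspachoSparseSolver,
        max_iterations=20,
    )
    ```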

Running unit tests (requires dev installation)

```bash
python -m pytest tests
```

By default, unit tests include tests for our CUDA extensions. You can add the option `-m "not cudaext"` to skip them when installing without CUDA support. Additionally, the tests for the BaSpaCho sparse solver are automatically skipped when its extlib is not compiled.

Examples

Simple example. This example fits the curve $y = v e^x$ to a dataset of $N$ observations $(x,y) \sim D$. This is modeled as an Objective with a single CostFunction that computes the residual $y - v e^x$. The Objective and the GaussNewton optimizer are encapsulated into a TheseusLayer. With Adam and MSE loss, $x$ is learned by differentiating through the TheseusLayer.

```python
import torch
import theseus as th

x_true, y_true, v_true = read_data()  # shapes (1, N), (1, N), (1, 1)
x = th.Variable(torch.randn_like(x_true), name="x")
y = th.Variable(y_true, name="y")
v = th.Vector(1, name="v")  # a manifold subclass of Variable for optim_vars

def error_fn(optim_vars, aux_vars):  # returns y - v * exp(x)
    x, y = aux_vars
    return y.tensor - optim_vars[0].tensor * torch.exp(x.tensor)

objective = th.Objective()
cost_function = th.AutoDiffCostFunction(
    [v], error_fn, y_true.shape[1], aux_vars=[x, y],
    cost_weight=th.ScaleCostWeight(1.0))
objective.add(cost_function)
layer = th.TheseusLayer(th.GaussNewton(objective, max_iterations=10))

phi = torch.nn.Parameter(x_true + 0.1 * torch.ones_like(x_true))
outer_optimizer = torch.optim.Adam([phi], lr=0.001)
for epoch in range(10):
    solution, info = layer.forward(
        input_tensors={"x": phi.clone(), "v": torch.ones(1, 1)},
        optimizer_kwargs={"backward_mode": "implicit"})
    outer_loss = torch.nn.functional.mse_loss(solution["v"], v_true)
    outer_loss.backward()
    outer_optimizer.step()
```

See the tutorials and the robotics and vision examples to learn about the API and usage.

Citing Theseus

If you use Theseus in your work, please cite the paper with the BibTeX below.

```bibtex
@article{pineda2022theseus,
  title   = {{Theseus: A Library for Differentiable Nonlinear Optimization}},
  author  = {Luis Pineda and Taosha Fan and Maurizio Monge and Shobha Venkataraman and Paloma Sodhi and Ricky TQ Chen and Joseph Ortiz and Daniel DeTone and Austin Wang and Stuart Anderson and Jing Dong and Brandon Amos and Mustafa Mukadam},
  journal = {Advances in Neural Information Processing Systems},
  year    = {2022}
}
```

License

Theseus is MIT licensed. See the LICENSE for details.

Additional Information

Theseus is made possible by the following contributors:

Made with contrib.rocks.

Owner

  • Name: Meta Research
  • Login: facebookresearch
  • Kind: organization
  • Location: Menlo Park, California

Citation (CITATION.cff)

cff-version: 1.2.0
message: "If you use this software, please cite it as below."
authors:
  - family-names: "Pineda"
    given-names: "Luis"
  - family-names: "Fan"
    given-names: "Taosha"
  - family-names: "Monge"
    given-names: "Maurizio"
  - family-names: "Venkataraman"
    given-names: "Shobha"
  - family-names: "Sodhi"
    given-names: "Paloma"
  - family-names: "Chen"
    given-names: "Ricky T. Q."
  - family-names: "Ortiz"
    given-names: "Joseph"
  - family-names: "DeTone"
    given-names: "Daniel"
  - family-names: "Wang"
    given-names: "Austin"
  - family-names: "Anderson"
    given-names: "Stuart"
  - family-names: "Dong"
    given-names: "Jing"
  - family-names: "Amos"
    given-names: "Brandon"
  - family-names: "Mukadam"
    given-names: "Mustafa"
title: "Theseus: A Library for Differentiable Nonlinear Optimization"
url: "https://github.com/facebookresearch/theseus"
preferred-citation:
  type: article
  journal: Advances in Neural Information Processing Systems
  title: "Theseus: A Library for Differentiable Nonlinear Optimization"
  url: "https://arxiv.org/abs/2207.09442"
  year: 2022
  authors:
  - family-names: "Pineda"
    given-names: "Luis"
  - family-names: "Fan"
    given-names: "Taosha"
  - family-names: "Monge"
    given-names: "Maurizio"
  - family-names: "Venkataraman"
    given-names: "Shobha"
  - family-names: "Sodhi"
    given-names: "Paloma"
  - family-names: "Chen"
    given-names: "Ricky T. Q."
  - family-names: "Ortiz"
    given-names: "Joseph"
  - family-names: "DeTone"
    given-names: "Daniel"
  - family-names: "Wang"
    given-names: "Austin"
  - family-names: "Anderson"
    given-names: "Stuart"
  - family-names: "Dong"
    given-names: "Jing"
  - family-names: "Amos"
    given-names: "Brandon"
  - family-names: "Mukadam"
    given-names: "Mustafa"

GitHub Events

Total
  • Issues event: 4
  • Watch event: 163
  • Delete event: 2
  • Issue comment event: 9
  • Push event: 38
  • Pull request review event: 4
  • Pull request review comment event: 1
  • Pull request event: 8
  • Fork event: 10
  • Create event: 2
Last Year
  • Issues event: 4
  • Watch event: 163
  • Delete event: 2
  • Issue comment event: 9
  • Push event: 38
  • Pull request review event: 4
  • Pull request review comment event: 1
  • Pull request event: 8
  • Fork event: 10
  • Create event: 2

Committers

Last synced: 8 months ago

All Time
  • Total Commits: 357
  • Total Committers: 27
  • Avg Commits per committer: 13.222
  • Development Distribution Score (DDS): 0.353
Past Year
  • Commits: 9
  • Committers: 4
  • Avg Commits per committer: 2.25
  • Development Distribution Score (DDS): 0.444
Top Committers
Name Email Commits
Luis Pineda l****p@g****m 231
Taosha Fan 6****a 66
Mustafa Mukadam m****h@g****m 19
Maurizio Monge m****e@g****m 7
Brandon Amos b****s@g****m 4
Jeffin Francis f****7@g****m 4
Joe Ortiz j****6@g****m 3
Austin Wang a****g@g****m 2
Dishank Bansal d****b 2
Riku Murai r****0@g****m 2
Luiz Gustavo Hafemann l****h@m****g 1
Neil Pandya n****l@n****m 1
Aidan Dunlap 1****n 1
Brent Yi y****h@g****m 1
Chris Paxton c****n@j****u 1
Christopher6488 4****8 1
Daniel DeTone 4****e 1
Gralerfics 5****s 1
Hesam-Vayu 1****u 1
Jacob Lubecki j****i@g****m 1
Jingyu_Qian 1****n 1
Johannes Schönberger j****h@d****e 1
Paloma Sodhi p****i@g****m 1
Ricky Chen r****n@g****m 1
Thomas Weng t****1@g****m 1
Yipu Zhao z****u@g****m 1
vshobha 5****a 1

Issues and Pull Requests

Last synced: 4 months ago

All Time
  • Total issues: 95
  • Total pull requests: 140
  • Average time to close issues: 22 days
  • Average time to close pull requests: 15 days
  • Total issue authors: 57
  • Total pull request authors: 20
  • Average comments per issue: 3.14
  • Average comments per pull request: 0.52
  • Merged pull requests: 125
  • Bot issues: 0
  • Bot pull requests: 0
Past Year
  • Issues: 8
  • Pull requests: 10
  • Average time to close issues: 4 days
  • Average time to close pull requests: 23 days
  • Issue authors: 8
  • Pull request authors: 5
  • Average comments per issue: 1.0
  • Average comments per pull request: 0.7
  • Merged pull requests: 8
  • Bot issues: 0
  • Bot pull requests: 0
Top Authors
Issue Authors
  • luisenp (12)
  • fantaosha (8)
  • EXing (4)
  • xphsean12 (4)
  • mhmukadam (3)
  • EmeryLee97 (3)
  • Jeff09 (3)
  • shengtsui (2)
  • WeiXiCZ (2)
  • zhangrentu (2)
  • tvercaut (2)
  • lpanaf (2)
  • JingyuQian (2)
  • Muon2 (2)
  • neanea04 (2)
Pull Request Authors
  • luisenp (89)
  • fantaosha (25)
  • mhmukadam (5)
  • bamos (3)
  • dishank-b (2)
  • ahojnnes (2)
  • xphsean12 (2)
  • lpanaf (2)
  • kumar-selvakumaran (2)
  • jacoblubecki (2)
  • rmurai0610 (2)
  • a3ahmad (1)
  • JingyuQian (1)
  • exhaustin (1)
  • suddhu (1)
Top Labels
Issue Labels
enhancement (6) refactor (4) good first issue (3) performance (2) bug (1) high priority (1)
Pull Request Labels
CLA Signed (141) enhancement (19) bug (8) documentation (4) refactor (2)

Packages

  • Total packages: 4
  • Total downloads:
    • pypi: 14,624 last month
  • Total dependent packages: 3
    (may contain duplicates)
  • Total dependent repositories: 2
    (may contain duplicates)
  • Total versions: 34
  • Total maintainers: 1
pypi.org: theseus-ai

A library for differentiable nonlinear optimization.

  • Versions: 13
  • Dependent Packages: 0
  • Dependent Repositories: 2
  • Downloads: 6,925 Last month
  • Docker Downloads: 0
Rankings
Stargazers count: 1.8%
Docker downloads count: 4.3%
Forks count: 4.4%
Average: 6.6%
Downloads: 7.8%
Dependent packages count: 10.1%
Dependent repos count: 11.5%
Maintainers (1)
Last synced: 4 months ago
pypi.org: torchlie

Torch extension for differentiable Lie groups.

  • Versions: 10
  • Dependent Packages: 2
  • Dependent Repositories: 0
  • Downloads: 4,231 Last month
Rankings
Stargazers count: 1.8%
Forks count: 4.8%
Dependent packages count: 7.1%
Average: 11.0%
Dependent repos count: 30.3%
Maintainers (1)
Last synced: 4 months ago
pypi.org: theseus-ai-nightly

A library for differentiable nonlinear optimization.

  • Versions: 8
  • Dependent Packages: 0
  • Dependent Repositories: 0
  • Downloads: 13 Last month
Rankings
Stargazers count: 1.8%
Forks count: 4.8%
Dependent packages count: 6.6%
Average: 11.5%
Downloads: 13.9%
Dependent repos count: 30.6%
Maintainers (1)
Last synced: 4 months ago
pypi.org: torchkin

Torch extension for differentiable kinematics.

  • Versions: 3
  • Dependent Packages: 1
  • Dependent Repositories: 0
  • Downloads: 3,455 Last month
Rankings
Stargazers count: 1.8%
Forks count: 4.7%
Dependent packages count: 7.2%
Average: 13.8%
Dependent repos count: 41.3%
Maintainers (1)
Last synced: 4 months ago

Dependencies

requirements/dev.txt pypi
  • Sphinx ==5.0.2 development
  • black >=20.8b1 development
  • differentiable-robot-model >=0.2.3 development
  • flake8 >=3.8.4 development
  • isort >=5.6.4 development
  • mock >=4.0.3 development
  • mypy >=0.981 development
  • pre-commit >=2.9.2 development
  • sphinx-rtd-theme ==1.0.0 development
  • types-PyYAML ==5.4.3 development
  • types-mock >=4.0.8 development
requirements/docs.txt pypi
  • Sphinx ==5.0.2
  • differentiable-robot-model >=0.2.3
  • mock >=4.0.3
  • numpy >=1.19.2
  • pybind11 >=2.7.1
  • pytest >=6.2.1
  • scikit-sparse >=0.4.5
  • scipy >=1.5.3
  • sphinx-rtd-theme ==1.0.0
  • torch >=1.11
requirements/main.txt pypi
  • functorch ==0.2.1
  • numpy >=1.19.2
  • pybind11 >=2.7.1
  • pytest >=6.2.1
  • scikit-sparse >=0.4.5
  • scipy >=1.5.3