torchopt

TorchOpt is an efficient library for differentiable optimization built upon PyTorch.

https://github.com/metaopt/torchopt

Science Score: 64.0%

This score indicates how likely this project is to be science-related based on various indicators:

  • CITATION.cff file
    Found CITATION.cff file
  • codemeta.json file
    Found codemeta.json file
  • .zenodo.json file
    Found .zenodo.json file
  • DOI references
  • Academic publication links
    Links to: arxiv.org
  • Committers with academic emails
    1 of 10 committers (10.0%) from academic institutions
  • Institutional organization owner
  • JOSS paper metadata
  • Scientific vocabulary similarity
    Low similarity (8.8%) to scientific vocabulary

Keywords

automatic-differentiation bilevel-optimization deep-learning differentiable-optimization differentiable-programming functional-programming implicit-differentiation meta-learning meta-reinforcement-learning meta-rl optimization optimizer pytorch

Keywords from Contributors

transformers mesh cryptocurrencies spacy-extension exoplanet energy-system jax hydrology interactome data-profilers
Last synced: 4 months ago

Repository

TorchOpt is an efficient library for differentiable optimization built upon PyTorch.

Basic Info
Statistics
  • Stars: 612
  • Watchers: 13
  • Forks: 39
  • Open Issues: 19
  • Releases: 9
Topics
automatic-differentiation bilevel-optimization deep-learning differentiable-optimization differentiable-programming functional-programming implicit-differentiation meta-learning meta-reinforcement-learning meta-rl optimization optimizer pytorch
Created over 3 years ago · Last pushed 4 months ago
Metadata Files
Readme Changelog Contributing License Code of conduct Citation

README.md

![Python 3.8+](https://img.shields.io/badge/Python-3.8%2B-brightgreen.svg) ![PyPI](https://img.shields.io/pypi/v/torchopt?logo=pypi) ![GitHub Workflow Status](https://img.shields.io/github/actions/workflow/status/metaopt/torchopt/tests.yml?label=tests&logo=github) ![CodeCov](https://img.shields.io/codecov/c/github/metaopt/torchopt/main?logo=codecov) ![Documentation Status](https://img.shields.io/readthedocs/torchopt?logo=readthedocs) ![Downloads](https://static.pepy.tech/personalized-badge/torchopt?period=total&left_color=grey&right_color=blue&left_text=downloads) ![License](https://img.shields.io/github/license/metaopt/torchopt?label=license&logo=data:image/svg+xml;base64,PHN2ZyB4bWxucz0iaHR0cDovL3d3dy53My5vcmcvMjAwMC9zdmciIHZpZXdCb3g9IjAgMCAyNCAyNCIgd2lkdGg9IjI0IiBoZWlnaHQ9IjI0IiBmaWxsPSIjZmZmZmZmIj48cGF0aCBmaWxsLXJ1bGU9ImV2ZW5vZGQiIGQ9Ik0xMi43NSAyLjc1YS43NS43NSAwIDAwLTEuNSAwVjQuNUg5LjI3NmExLjc1IDEuNzUgMCAwMC0uOTg1LjMwM0w2LjU5NiA1Ljk1N0EuMjUuMjUgMCAwMTYuNDU1IDZIMi4zNTNhLjc1Ljc1IDAgMTAwIDEuNUgzLjkzTC41NjMgMTUuMThhLjc2Mi43NjIgMCAwMC4yMS44OGMuMDguMDY0LjE2MS4xMjUuMzA5LjIyMS4xODYuMTIxLjQ1Mi4yNzguNzkyLjQzMy42OC4zMTEgMS42NjIuNjIgMi44NzYuNjJhNi45MTkgNi45MTkgMCAwMDIuODc2LS42MmMuMzQtLjE1NS42MDYtLjMxMi43OTItLjQzMy4xNS0uMDk3LjIzLS4xNTguMzEtLjIyM2EuNzUuNzUgMCAwMC4yMDktLjg3OEw1LjU2OSA3LjVoLjg4NmMuMzUxIDAgLjY5NC0uMTA2Ljk4NC0uMzAzbDEuNjk2LTEuMTU0QS4yNS4yNSAwIDAxOS4yNzUgNmgxLjk3NXYxNC41SDYuNzYzYS43NS43NSAwIDAwMCAxLjVoMTAuNDc0YS43NS43NSAwIDAwMC0xLjVIMTIuNzVWNmgxLjk3NGMuMDUgMCAuMS4wMTUuMTQuMDQzbDEuNjk3IDEuMTU0Yy4yOS4xOTcuNjMzLjMwMy45ODQuMzAzaC44ODZsLTMuMzY4IDcuNjhhLjc1Ljc1IDAgMDAuMjMuODk2Yy4wMTIuMDA5IDAgMCAuMDAyIDBhMy4xNTQgMy4xNTQgMCAwMC4zMS4yMDZjLjE4NS4xMTIuNDUuMjU2Ljc5LjRhNy4zNDMgNy4zNDMgMCAwMDIuODU1LjU2OCA3LjM0MyA3LjM0MyAwIDAwMi44NTYtLjU2OWMuMzM4LS4xNDMuNjA0LS4yODcuNzktLjM5OWEzLjUgMy41IDAgMDAuMzEtLjIwNi43NS43NSAwIDAwLjIzLS44OTZMMjAuMDcgNy41aDEuNTc4YS43NS43NSAwIDAwMC0xLjVoLTQuMTAyYS4yNS4yNSAwIDAxLS4xNC0uMDQzbC0xLjY5Ny0xLjE1NGExLjc1IDEuNzUgMCAwMC0uOTg0LS4zMDNIMTIuNzVWMi43NXpNMi4xOTMgMTUuMTk4YTUuNDE4IDUuNDE4IDAgMDAyLjU1Ny42MzUgNS40MTggNS40MTggMCAwMDIuNTU3LS42MzVMNC43NSA5LjM2OGwtMi41NTcgNS44M3ptMTQuNTEtLjAyNGMuMDgyLjA0LjE3NC4wODMuMjc1LjEyNi41My4yMjMgMS4zMDUuNDUgMi4yNzIuNDVhNS44NDYgNS44NDYgMCAwMDIuNTQ3LS41NzZMMTkuMjUgOS4zNjdsLTIuNTQ3IDUuODA3eiI+PC9wYXRoPjwvc3ZnPgo=)

Installation | Documentation | Tutorials | Examples | Paper | Citation

TorchOpt is an efficient library for differentiable optimization built upon PyTorch. TorchOpt is:

  • Comprehensive: TorchOpt provides three differentiation modes - explicit differentiation, implicit differentiation, and zero-order differentiation for handling different differentiable optimization situations.
  • Flexible: TorchOpt provides both functional and object-oriented APIs for users' different preferences. Users can implement differentiable optimization in a JAX-like or PyTorch-like style.
  • Efficient: TorchOpt provides (1) CPU/GPU-accelerated differentiable optimizers, (2) an RPC-based distributed training framework, and (3) fast tree operations, which together greatly increase training efficiency for bilevel optimization problems.

Beyond differentiable optimization, TorchOpt can also be regarded as a functional optimizer that enables a JAX-like composable functional optimizer for PyTorch. With TorchOpt, users can easily conduct neural network optimization in PyTorch with a functional-style optimizer, similar to Optax in JAX.




TorchOpt as Functional Optimizer

The design of TorchOpt follows the philosophy of functional programming. In line with functorch, users can write models, optimizers, and training loops in PyTorch in a functional style. We use the Adam optimizer as an example in the following illustration. You can also check out the tutorial notebook Functional Optimizer for more details.

Optax-Like API

For users who prefer fully functional programming, we offer an Optax-like API in which gradients and optimizer states are passed to the optimizer function explicitly. Here is an example coupled with functorch:

```python
class Net(nn.Module): ...

class Loader(DataLoader): ...

net = Net()  # init
loader = Loader()
optimizer = torchopt.adam()

model, params = functorch.make_functional(net)  # use functorch to extract network parameters
opt_state = optimizer.init(params)              # init optimizer

xs, ys = next(loader)             # get data
pred = model(params, xs)          # forward
loss = F.cross_entropy(pred, ys)  # compute loss

grads = torch.autograd.grad(loss, params)                 # compute gradients
updates, opt_state = optimizer.update(grads, opt_state)   # get updates
params = torchopt.apply_updates(params, updates)          # update network parameters
```

We also provide a wrapper torchopt.FuncOptimizer to make maintaining the optimizer state easier:

```python
net = Net()  # init
loader = Loader()
optimizer = torchopt.FuncOptimizer(torchopt.adam())  # wrap with `torchopt.FuncOptimizer`

model, params = functorch.make_functional(net)       # use functorch to extract network parameters

for xs, ys in loader:                 # get data
    pred = model(params, xs)          # forward
    loss = F.cross_entropy(pred, ys)  # compute loss

    params = optimizer.step(loss, params)  # update network parameters
```

PyTorch-Like API

We also design a base class torchopt.Optimizer that has the same interface as torch.optim.Optimizer. For traditional PyTorch users, we offer the original PyTorch APIs (e.g., zero_grad() or step()) by wrapping our Optax-like API.

```python
net = Net()  # init
loader = Loader()
optimizer = torchopt.Adam(net.parameters())

xs, ys = next(loader)             # get data
pred = net(xs)                    # forward
loss = F.cross_entropy(pred, ys)  # compute loss

optimizer.zero_grad()  # zero gradients
loss.backward()        # backward
optimizer.step()       # step updates
```

Differentiable

In addition to providing the same optimization routines as torch.optim, an important benefit of the functional optimizer is that differentiable optimization becomes easy to implement. This is particularly helpful when an algorithm requires differentiation through the optimization updates (as in meta-learning). The optimizer takes the gradients and optimizer states as inputs and uses non-in-place operators to compute and output the updates, so the update step itself remains part of the autograd graph; users only need to pass the argument inplace=False to the update functions. Check out the section Explicit Gradient (EG) functional API for an example.
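As a minimal sketch of this point (reusing fmodel, params, and meta_params in the style of the examples below; inner_loss is a hypothetical loss helper), the only changes compared with ordinary training are create_graph=True when taking gradients and inplace=False when applying the update:

```python
optimizer = torchopt.adam()
state = optimizer.init(params)

loss = inner_loss(fmodel, params, meta_params)                 # hypothetical inner loss
grads = torch.autograd.grad(loss, params, create_graph=True)   # keep the gradient computation differentiable

# Out-of-place update: the new `params` stay connected to the autograd graph,
# so a later outer loss can be differentiated through this step.
updates, state = optimizer.update(grads, state, inplace=False)
params = torchopt.apply_updates(params, updates)
```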


TorchOpt for Differentiable Optimization

We design a bilevel-optimization updating scheme, which can be easily extended to realize various differentiable optimization processes.

As shown above, the scheme contains an outer level with parameters $\phi$ that can be learned end-to-end through the inner-level parameter solution $\theta^{\prime}(\phi)$ by using the best-response derivatives $\partial \theta^{\prime}(\phi) / \partial \phi$. The key component of this scheme is calculating the best-response (BR) Jacobian, and TorchOpt supports three differentiation modes for doing so. From the BR-based perspective, existing gradient methods can be categorized into three groups: explicit gradient over unrolled optimization, implicit differentiation, and zero-order gradient differentiation.
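Concretely, all three modes target the same total derivative of an outer objective $\mathcal{L}^{\text{out}}$ evaluated at the inner solution (a standard bilevel chain rule, written here only to make the role of the BR Jacobian explicit):

$$
\frac{\mathrm{d} \mathcal{L}^{\text{out}}\bigl(\theta^{\prime}(\phi), \phi\bigr)}{\mathrm{d} \phi}
= \frac{\partial \mathcal{L}^{\text{out}}}{\partial \phi}
+ \frac{\partial \mathcal{L}^{\text{out}}}{\partial \theta^{\prime}}
  \frac{\partial \theta^{\prime}(\phi)}{\partial \phi},
$$

and the three modes differ only in how the BR Jacobian $\partial \theta^{\prime}(\phi) / \partial \phi$ is obtained.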

Explicit Gradient (EG)

The idea of the explicit gradient is to treat the gradient step as a differentiable function and backpropagate through the unrolled optimization path. This differentiation mode is suitable for algorithms in which the inner-level optimization solution is obtained with a small number of gradient steps, such as MAML and MGRL. TorchOpt offers both functional and object-oriented APIs for EG to fit different user applications.

Functional API <!-- omit in toc -->

The functional API conducts optimization in a functional programming style. Note that we pass the argument inplace=False to the functions to make the optimization differentiable. Refer to the tutorial notebook Functional Optimizer for more guidance.

```python
# Define functional optimizer
optimizer = torchopt.adam()
# Define meta and inner parameters
meta_params = ...
fmodel, params = make_functional(model)
# Initial state
state = optimizer.init(params)

for iter in range(iter_times):
    loss = inner_loss(fmodel, params, meta_params)
    grads = torch.autograd.grad(loss, params)
    # Apply non-inplace parameter update
    updates, state = optimizer.update(grads, state, inplace=False)
    params = torchopt.apply_updates(params, updates)

loss = outer_loss(fmodel, params, meta_params)
meta_grads = torch.autograd.grad(loss, meta_params)
```

OOP API <!-- omit in toc -->

TorchOpt also provides an OOP API compatible with the PyTorch programming style. Refer to the example and the tutorial notebooks Meta-Optimizer and Stop Gradient for more guidance.

```python
# Define meta and inner parameters
meta_params = ...
model = ...
# Define differentiable optimizer
optimizer = torchopt.MetaAdam(model)  # a model instance as the argument instead of model.parameters()

for iter in range(iter_times):
    # Perform inner update
    loss = inner_loss(model, meta_params)
    optimizer.step(loss)

loss = outer_loss(model, meta_params)
loss.backward()
```

Implicit Gradient (IG)

By treating the solution $\theta^{\prime}$ as an implicit function of $\phi$, the idea of IG is to obtain the analytical best-response derivatives $\partial \theta^{\prime} (\phi) / \partial \phi$ directly via the implicit function theorem. This is suitable for algorithms in which the inner-level optimal solution satisfies ${\left. \frac{\partial F (\theta, \phi)}{\partial \theta} \right\rvert}_{\theta=\theta^{\prime}} = 0$ or reaches some stationary condition $F (\theta^{\prime}, \phi) = 0$, such as iMAML and DEQ. TorchOpt offers both functional and OOP APIs that support both conjugate-gradient-based and Neumann-series-based IG methods. Refer to the example iMAML and the notebook Implicit Gradient for more guidance.
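To make the implicit-function-theorem step explicit (a standard identity, not TorchOpt-specific notation): writing the stationary condition as $T(\theta, \phi) = 0$ at $\theta = \theta^{\prime}(\phi)$ (with $T = \partial F / \partial \theta$ in the optimality case), differentiating with respect to $\phi$ gives

$$
\frac{\partial T}{\partial \theta} \frac{\partial \theta^{\prime}(\phi)}{\partial \phi} + \frac{\partial T}{\partial \phi} = 0
\quad\Longrightarrow\quad
\frac{\partial \theta^{\prime}(\phi)}{\partial \phi} = -\left(\frac{\partial T}{\partial \theta}\right)^{-1} \frac{\partial T}{\partial \phi},
$$

which is why a linear solver (conjugate gradient or Neumann series) appears in the APIs below: it is used to apply the inverse without ever forming it.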

Functional API <!-- omit in toc -->

For the implicit gradient, similar to JAXopt, users define the stationary condition, and TorchOpt provides a decorator that wraps the solve function to enable implicit gradient computation.

```python
# The stationary condition for the inner-loop
def stationary(params, meta_params, data):
    # Stationary condition construction
    return stationary_condition

# Decorator for wrapping the function
# Optionally specify the linear solver (conjugate gradient or Neumann series)
@torchopt.diff.implicit.custom_root(stationary, solve=linear_solver)
def solve(params, meta_params, data):
    # Forward optimization process for params
    return output

# Define params, meta_params and get data
params, meta_params, data = ..., ..., ...
optimal_params = solve(params, meta_params, data)
loss = outer_loss(optimal_params)

meta_grads = torch.autograd.grad(loss, meta_params)
```

OOP API <!-- omit in toc -->

TorchOpt also offers an OOP API, for which users inherit from the class torchopt.nn.ImplicitMetaGradientModule to construct the inner-loop network. Users need to define the stationary condition (or objective function) and the inner-loop solve function to enable implicit gradient computation.

```python
# Inherited from the class ImplicitMetaGradientModule
# Optionally specify the linear solver (conjugate gradient or Neumann series)
class InnerNet(ImplicitMetaGradientModule, linear_solve=linear_solver):
    def __init__(self, meta_param):
        super().__init__()
        self.meta_param = meta_param
        ...

    def forward(self, batch):
        # Forward process
        ...

    def optimality(self, batch, labels):
        # Stationary condition construction for calculating implicit gradient
        # NOTE: If this method is not implemented, it will be automatically
        # derived from the gradient of the `objective` function.
        ...

    def objective(self, batch, labels):
        # Define the inner-loop optimization objective
        ...

    def solve(self, batch, labels):
        # Conduct the inner-loop optimization
        ...

# Get meta_params and data
meta_params, data = ..., ...
inner_net = InnerNet(meta_params)

# Solve for inner-loop process related to the meta-parameters
optimal_inner_net = inner_net.solve(data)

# Get outer loss and solve for meta-gradient
loss = outer_loss(optimal_inner_net)
meta_grads = torch.autograd.grad(loss, meta_params)
```

Zero-order Differentiation (ZD)

When the inner-loop process is non-differentiable, or one wants to avoid the heavy computational burden of the previous two modes (brought by the Hessian), one can choose zero-order differentiation (ZD). ZD typically estimates gradients with zero-order methods such as finite differences or evolutionary strategies (ES). Instead of optimizing the objective $F$ directly, ES optimizes a smoothed objective. TorchOpt provides both functional and OOP APIs for the ES method. Refer to the tutorial notebook Zero-order Differentiation for more guidance.
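For reference, the smoothed objective used by ES (standard formulation; the Gaussian-noise case is assumed for the identity below, and $\theta$ denotes the parameters being differentiated) is $\tilde{F}_{\sigma}(\theta) = \mathbb{E}_{z \sim \mathcal{N}(0, I)}\bigl[F(\theta + \sigma z)\bigr]$, whose gradient admits the zero-order estimator

$$
\nabla_{\theta} \tilde{F}_{\sigma}(\theta) = \frac{1}{\sigma} \, \mathbb{E}_{z \sim \mathcal{N}(0, I)}\bigl[F(\theta + \sigma z)\, z\bigr],
$$

which is approximated in practice by averaging over a finite number of noise samples (the num_samples and sigma hyper-parameters in the APIs below).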

Functional API <!-- omit in toc -->

For zero-order differentiation, users need to define the forward pass calculation and the noise sampling procedure. TorchOpt provides a decorator that wraps the forward function to enable zero-order differentiation.

```python
# Customize the noise sampling function in ES
def distribution(sample_shape):
    # Generate a batch of noise samples
    # NOTE: The distribution should be spherically symmetric with a constant variance of 1.
    ...
    return noise_batch

# Distribution can also be an instance of `torch.distributions.Distribution`,
# e.g., `torch.distributions.Normal(...)`
distribution = torch.distributions.Normal(loc=0, scale=1)

# Specify the method and hyper-parameters of ES
@torchopt.diff.zero_order(distribution, method)
def forward(params, batch, labels):
    # Forward process
    ...
    return objective  # the returned tensor should be a scalar tensor
```

OOP API <!-- omit in toc -->

TorchOpt also offers an OOP API, for which users inherit from the class torchopt.nn.ZeroOrderGradientModule to construct the network as an nn.Module in a classical PyTorch style. Users need to define the forward process in forward() and the noise sampling procedure in sample().

```python
# Inherited from the class ZeroOrderGradientModule
# Optionally specify the `method` and/or `num_samples` and/or `sigma` used for sampling
class Net(ZeroOrderGradientModule, method=method, num_samples=num_samples, sigma=sigma):
    def __init__(self, ...):
        ...

    def forward(self, batch):
        # Forward process
        ...
        return objective  # the returned tensor should be a scalar tensor

    def sample(self, sample_shape=torch.Size()):
        # Generate a batch of noise samples
        # NOTE: The distribution should be spherically symmetric with a constant variance of 1.
        ...
        return noise_batch

# Get model and data
net = Net(...)
data = ...

# Forward pass
loss = net(data)

# Backward pass using zero-order differentiation
grads = torch.autograd.grad(loss, net.parameters())
```


High-Performance and Distributed Training

CPU/GPU accelerated differentiable optimizer

We treat the optimizer as a whole instead of separating it into several basic operators (e.g., sqrt and div). Therefore, by manually writing the forward and backward functions, we can perform symbolic reduction. In addition, we can store intermediate results that can be reused during backpropagation. We write the accelerated functions in C++ (OpenMP) and CUDA, bind them with pybind11 so that they can be called from Python, and then define the forward and backward behavior using torch.autograd.Function. Users can enable them by simply setting the use_accelerated_op flag to True. Refer to the corresponding sections in the tutorials Functional Optimizer and Meta-Optimizer.

```python
optimizer = torchopt.MetaAdam(model, lr, use_accelerated_op=True)
```
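The fused-operator pattern described above (hand-written forward/backward with reusable intermediates) can be illustrated with a small pure-Python sketch. This is not TorchOpt's actual C++/CUDA implementation, and the operator name is hypothetical:

```python
import torch


class FusedScaleByRSqrt(torch.autograd.Function):
    """Illustrative fused update-scaling op: update / (sqrt(nu) + eps)."""

    @staticmethod
    def forward(ctx, update, nu, eps):
        denom = nu.sqrt().add(eps)    # intermediate result reused in backward
        ctx.save_for_backward(denom)
        return update / denom

    @staticmethod
    def backward(ctx, grad_output):
        (denom,) = ctx.saved_tensors
        # Hand-written symbolic backward: d(update / denom) / d(update) = 1 / denom.
        # Gradients w.r.t. `nu` and `eps` are omitted for brevity.
        return grad_output / denom, None, None


update = torch.randn(3, requires_grad=True)
nu = torch.rand(3) + 1e-3
scaled = FusedScaleByRSqrt.apply(update, nu, 1e-8)
scaled.sum().backward()
print(update.grad)  # equals 1 / (sqrt(nu) + eps)
```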

Distributed Training

TorchOpt provides distributed training features based on the PyTorch RPC module for better training speed and multi-node multi-GPU support. Different from the MPI-like parallelization paradigm, which uses multiple homogeneous workers and requires carefully designed communication hooks, the RPC APIs allow users to build their optimization pipeline more flexibly. Experimental results show that we achieve an approximately linear relationship between the speed-up ratio and the number of workers. Check out the Distributed Training Documentation and distributed MAML example for more specific guidance.
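As a rough sketch of the RPC paradigm (using plain torch.distributed.rpc rather than TorchOpt's own distributed wrappers; inner_rollout and the worker names are hypothetical), a coordinator can fan inner-loop work out to workers and aggregate the returned outer losses:

```python
import torch.distributed.rpc as rpc


def inner_rollout(meta_params, task_batch):
    # Run one inner-loop optimization on this worker and return the outer loss.
    ...


def outer_step(meta_params, task_batches):
    # Dispatch one inner loop per task to remote workers, then sum the results.
    futures = [
        rpc.rpc_async(f"worker{i + 1}", inner_rollout, args=(meta_params, batch))
        for i, batch in enumerate(task_batches)
    ]
    return sum(fut.wait() for fut in futures)


# Each process calls rpc.init_rpc(name, rank=..., world_size=...) at startup
# and rpc.shutdown() when done; rank 0 acts as the coordinator here.
```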

OpTree

We implement PyTree operations in C++ to enable fast nested-structure flattening. Tree operations (e.g., flatten and unflatten) are crucial for enabling the functional and just-in-time (JIT) features of deep learning frameworks. By implementing them in C++, we can use cache/memory-friendly structures (e.g., absl::InlinedVector) to improve performance. For more guidance and comparison results, please refer to our open-source project OpTree.
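A small usage example of the tree operations mentioned above, using the OpTree API (which mirrors JAX pytrees); the nested dict here is arbitrary demo data:

```python
import optree

params = {'layer1': {'weight': 1.0, 'bias': 2.0}, 'layer2': (3.0, 4.0)}

# Flatten the nested structure into a flat list of leaves plus a treespec ...
leaves, treespec = optree.tree_flatten(params)
# ... transform the leaves, then rebuild the original structure.
doubled = optree.tree_unflatten(treespec, [2.0 * x for x in leaves])
# Equivalent one-step version.
also_doubled = optree.tree_map(lambda x: 2.0 * x, params)
```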


Visualization

The complex gradient flow in meta-learning poses a great challenge for managing the gradient flow and verifying its correctness. TorchOpt provides a visualization tool that draws variable (e.g., network parameter or meta-parameter) names on the gradient graph for better analysis. The visualization tool is modified from torchviz. Refer to the example visualization code and the tutorial notebook Visualization for more details.

The figure below shows the visualization result. Compared with torchviz, TorchOpt fuses the operations within the Adam optimizer together (orange) to reduce the complexity and provide a simpler visualization.
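A minimal usage sketch, assuming the torchopt.visual.make_dot helper shown in the Visualization tutorial (the exact argument format may differ between versions); net, meta_param, and loss are taken from the meta-learning examples above:

```python
# Label tensors by name so they appear as readable nodes in the rendered graph.
graph = torchopt.visual.make_dot(
    loss,
    params=[net.named_parameters(), {'meta_param': meta_param, 'loss': loss}],
)
graph.render('meta_graph', format='svg')  # requires Graphviz to be installed
```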


Examples

In the examples directory, we provide several functional-optimizer examples and lightweight meta-learning examples built with TorchOpt.

Also, check examples for more distributed/visualization/functorch-compatible examples.


Installation

Requirements

  • PyTorch
  • (Optional) For visualizing computation graphs
    • Graphviz (for Linux users use apt/yum install graphviz or conda install -c anaconda python-graphviz)

Please follow the instructions at https://pytorch.org to install PyTorch in your Python environment first. Then run the following command to install TorchOpt from PyPI:

```bash
pip3 install torchopt
```

If the minimum version of PyTorch is not satisfied, pip will install/upgrade it for you. Please be careful about the torch build for CPU / CUDA support (e.g. cpu, cu118, cu121). You may need to specify the extra index URL for the torch package:

```bash
pip3 install torchopt --extra-index-url https://download.pytorch.org/whl/cu121
```

See https://pytorch.org for more information about installing PyTorch.

You can also build the shared libraries from source:

```bash
git clone https://github.com/metaopt/torchopt.git
cd torchopt
pip3 install .
```

We provide a conda environment recipe to install the build toolchain such as cmake, g++, and nvcc. You can use the following commands with conda / mamba to create a new isolated environment.

```bash
git clone https://github.com/metaopt/torchopt.git
cd torchopt

# You may need `CONDA_OVERRIDE_CUDA` if conda fails to detect the NVIDIA driver (e.g., in Docker or WSL2)
CONDA_OVERRIDE_CUDA=12.1 conda env create --file conda-recipe-minimal.yaml

conda activate torchopt
make install-editable  # or run `pip3 install --no-build-isolation --editable .`
```


Changelog

See CHANGELOG.md.


Citing TorchOpt

If you find TorchOpt useful, please cite it in your publications.

```bibtex
@article{JMLR:TorchOpt,
  author  = {Jie Ren* and Xidong Feng* and Bo Liu* and Xuehai Pan* and Yao Fu and Luo Mai and Yaodong Yang},
  title   = {TorchOpt: An Efficient Library for Differentiable Optimization},
  journal = {Journal of Machine Learning Research},
  year    = {2023},
  volume  = {24},
  number  = {367},
  pages   = {1--14},
  url     = {http://jmlr.org/papers/v24/23-0191.html}
}
```

The Team

TorchOpt is a work by Jie Ren, Xidong Feng, Bo Liu, Xuehai Pan, Luo Mai, and Yaodong Yang.

License

TorchOpt is released under the Apache License, Version 2.0.

Owner

  • Name: MetaOPT Team
  • Login: metaopt
  • Kind: organization

Citation (CITATION.cff)

cff-version: 1.2.0
title: TorchOpt
message: 'If you use this software, please cite it as below.'
type: software
authors:
  - given-names: Jie
    family-names: Ren
    email: jieren9806@gmail.com
    affiliation: University of Edinburgh
  - given-names: Xidong
    family-names: Feng
    email: xidong.feng.20@ucl.ac.uk
    affiliation: University College London
  - given-names: Bo
    family-names: Liu
    email: benjaminliu.eecs@gmail.com
    affiliation: Peking University
    orcid: 'https://orcid.org/0000-0001-5426-515X'
  - given-names: Xuehai
    family-names: Pan
    email: xuehaipan@pku.edu.cn
    affiliation: Peking University
  - given-names: Yao
    family-names: Fu
    email: f.yu@ed.ac.uk
    affiliation: University of Edinburgh
  - given-names: Luo
    family-names: Mai
    email: luo.mai@ed.ac.uk
    affiliation: University of Edinburgh
  - given-names: Yaodong
    family-names: Yang
    affiliation: Peking University
    email: yaodong.yang@pku.edu.cn
version: 0.7.3
date-released: "2023-11-10"
license: Apache-2.0
repository-code: "https://github.com/metaopt/torchopt"

GitHub Events

Total
  • Issues event: 2
  • Watch event: 66
  • Push event: 16
  • Fork event: 3
Last Year
  • Issues event: 2
  • Watch event: 66
  • Push event: 16
  • Fork event: 3

Committers

Last synced: 7 months ago

All Time
  • Total Commits: 253
  • Total Committers: 10
  • Avg Commits per committer: 25.3
  • Development Distribution Score (DDS): 0.462
Past Year
  • Commits: 3
  • Committers: 2
  • Avg Commits per committer: 1.5
  • Development Distribution Score (DDS): 0.333
Top Committers
Name Email Commits
Xuehai Pan X****n@p****n 136
Bo Liu b****s@g****m 44
pre-commit-ci[bot] 6****] 31
Hello_World j****6@g****m 19
dependabot[bot] 4****] 15
Xidong Feng f****h@1****m 3
Vincent Moens v****s@g****m 2
Stefano Woerner s****o@w****u 1
Ikko Eltociear Ashimine e****r@g****m 1
Yao Fu f****0@g****m 1

Issues and Pull Requests

Last synced: 4 months ago

All Time
  • Total issues: 29
  • Total pull requests: 137
  • Average time to close issues: about 1 month
  • Average time to close pull requests: 12 days
  • Total issue authors: 20
  • Total pull request authors: 10
  • Average comments per issue: 1.14
  • Average comments per pull request: 0.69
  • Merged pull requests: 115
  • Bot issues: 0
  • Bot pull requests: 55
Past Year
  • Issues: 5
  • Pull requests: 1
  • Average time to close issues: N/A
  • Average time to close pull requests: N/A
  • Issue authors: 5
  • Pull request authors: 1
  • Average comments per issue: 0.0
  • Average comments per pull request: 0.0
  • Merged pull requests: 0
  • Bot issues: 0
  • Bot pull requests: 1
Top Authors
Issue Authors
  • Benjamin-eecs (7)
  • XuehaiPan (3)
  • happpyosu (2)
  • floatingCatty (1)
  • waterhorse1 (1)
  • hccz95 (1)
  • dilithjay (1)
  • ChiahsinChu (1)
  • ycsos (1)
  • vmichals (1)
  • xianghang (1)
  • XindiWu (1)
  • marvinfriede (1)
  • lmz123321 (1)
  • woithook (1)
Pull Request Authors
  • XuehaiPan (45)
  • pre-commit-ci[bot] (34)
  • dependabot[bot] (25)
  • Benjamin-eecs (20)
  • JieRen98 (10)
  • ycsos (2)
  • waterhorse1 (2)
  • vmoens (1)
  • eltociear (1)
  • StefanoWoerner (1)
Top Labels
Issue Labels
bug (12) enhancement (9) question (8) feature (4) cxx / cuda (3) distributed (2) better errors (1) functorch (1) upstream (1) pytorch (1) dependencies (1)
Pull Request Labels
dependencies (56) enhancement (38) feature (15) pytorch (13) bug (10) cxx / cuda (9) documentation (8) example / tutorial (7) functorch (7) upstream (6) distributed (2) better errors (1) jax (1)

Packages

  • Total packages: 2
  • Total downloads:
    • pypi 5,231 last-month
  • Total dependent packages: 4
    (may contain duplicates)
  • Total dependent repositories: 2
    (may contain duplicates)
  • Total versions: 23
  • Total maintainers: 2
pypi.org: torchopt

An efficient library for differentiable optimization for PyTorch.

  • Versions: 14
  • Dependent Packages: 4
  • Dependent Repositories: 2
  • Downloads: 5,231 Last month
Rankings
Stargazers count: 2.9%
Dependent packages count: 3.2%
Average: 6.6%
Forks count: 7.1%
Downloads: 8.2%
Dependent repos count: 11.5%
Maintainers (2)
Last synced: 4 months ago
proxy.golang.org: github.com/metaopt/torchopt
  • Versions: 9
  • Dependent Packages: 0
  • Dependent Repositories: 0
Rankings
Dependent packages count: 6.4%
Average: 6.6%
Dependent repos count: 6.8%
Last synced: 4 months ago

Dependencies

.github/workflows/build.yml actions
  • actions/checkout v3 composite
  • actions/download-artifact v3 composite
  • actions/setup-python v4 composite
  • actions/upload-artifact v3 composite
  • pypa/cibuildwheel v2.12.0 composite
  • pypa/gh-action-pypi-publish release/v1 composite
.github/workflows/lint.yml actions
  • actions/checkout v3 composite
  • actions/setup-python v4 composite
.github/workflows/tests.yml actions
  • actions/checkout v3 composite
  • actions/setup-python v4 composite
  • codecov/codecov-action v3 composite
Dockerfile docker
  • builder latest build
  • devel-builder latest build
  • nvidia/cuda "${cuda_docker_tag}" build
docs/requirements.txt pypi
  • IPython *
  • docutils *
  • ipykernel *
  • matplotlib *
  • myst-nb *
  • pandoc *
  • sphinx >=5.2.1
  • sphinx-autoapi *
  • sphinx-autobuild *
  • sphinx-autodoc-typehints >=1.19.2
  • sphinx-copybutton *
  • sphinx-rtd-theme *
  • sphinxcontrib-bibtex *
  • sphinxcontrib-katex *
  • torch >=1.13
examples/requirements.txt pypi
  • gym >=0.20.0,<0.24.0a0
  • matplotlib *
  • pandas *
  • pillow *
  • seaborn *
  • setproctitle *
  • torch >=1.13
  • torchrl *
  • torchvision *
  • torchviz *
requirements.txt pypi
  • graphviz *
  • numpy *
  • optree >=0.4.1
  • torch >=1.13
  • typing-extensions >=4.0.0
tests/requirements.txt pypi
  • black >=22.6.0 test
  • cpplint * test
  • doc8 <1.0.0a0 test
  • flake8 * test
  • flake8-bugbear * test
  • isort >=5.11.0 test
  • jax >=0.3 test
  • jaxopt * test
  • mypy >=0.990 test
  • optax * test
  • pre-commit * test
  • pydocstyle * test
  • pyenchant * test
  • pylint >=2.15.0 test
  • pytest * test
  • pytest-cov * test
  • pytest-xdist * test
  • torch >=1.13 test
  • types-setuptools * test
tutorials/requirements.txt pypi
  • ipykernel *
  • jax >=0.3
  • jaxopt *
  • optax *
  • torch >=1.13
  • torchvision *