pytorch-optimize

A simple black-box optimization framework to train your pytorch models for optimizing non-differentiable objectives

https://github.com/rajcscw/pytorch-optimize

Science Score: 54.0%

This score indicates how likely this project is to be science-related based on various indicators:

  • CITATION.cff file
    Found CITATION.cff file
  • codemeta.json file
    Found codemeta.json file
  • .zenodo.json file
    Found .zenodo.json file
  • DOI references
  • Academic publication links
    Links to: arxiv.org
  • Committers with academic emails
  • Institutional organization owner
  • JOSS paper metadata
  • Scientific vocabulary similarity
    Low similarity (11.3%) to scientific vocabulary

Keywords

blackbox-optimization evolution-strategies pytorch reinforcement-learning
Last synced: 6 months ago

Repository

A simple black-box optimization framework to train your pytorch models for optimizing non-differentiable objectives

Basic Info
  • Host: GitHub
  • Owner: rajcscw
  • License: mit
  • Language: Python
  • Default Branch: master
  • Homepage:
  • Size: 60.5 KB
Statistics
  • Stars: 11
  • Watchers: 2
  • Forks: 3
  • Open Issues: 0
  • Releases: 0
Topics
blackbox-optimization evolution-strategies pytorch reinforcement-learning
Created over 5 years ago · Last pushed almost 3 years ago
Metadata Files
Readme License Citation

README.md

pytorch-optimize

pytorch-optimize is a simple black-box framework to train pytorch models to optimize arbitrary objective functions. It provides simple wrappers for models and optimizers so that they can be used to optimize a provided objective function, including non-differentiable objectives. It also supports optimization of multiple objectives out of the box. The optimizer itself is based on Evolution Strategies, which estimates the gradient using parallel workers and therefore scales well across multiple cores.
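To make that concrete, the gradient estimate behind Evolution Strategies can be sketched in a few lines. This is a minimal, self-contained illustration of the general technique, not the library's actual implementation; the function name and the normalization step are assumptions:

```python
import numpy as np

def es_gradient_estimate(theta, objective_fn, sigma=0.1, n_samples=100, seed=0):
    """Estimate the gradient of objective_fn at parameter vector theta by
    evaluating Gaussian perturbations of theta -- no backpropagation needed."""
    rng = np.random.default_rng(seed)
    eps = rng.standard_normal((n_samples, theta.size))           # perturbation directions
    # each evaluation is independent, so it can be farmed out to a parallel worker
    scores = np.array([objective_fn(theta + sigma * e) for e in eps])
    scores = (scores - scores.mean()) / (scores.std() + 1e-8)    # scale-free (rank-like) normalization
    # directions that scored higher pull the estimate more strongly
    return eps.T @ scores / (n_samples * sigma)

# The parameters are then updated by gradient *ascent*:
# theta = theta + learning_rate * es_gradient_estimate(theta, objective_fn)
```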

Install

```
git clone https://github.com/rajcscw/pytorch-optimize.git
cd pytorch-optimize
pip install .
```

Usage

1. Wrap your pytorch model (torch.nn.Module) using the Model class. The Model class automatically extracts the trainable parameters of the network and samples them at each training step. The sampling strategy can be changed by passing it as an argument to the Model class; possible strategies include sampling layers bottom-up, top-down, at random, or all layers at once.

```python
from pytorch_optimize.model import Model, SamplingStrategy

net = Net(..)
model = Model(net=net, strategy=SamplingStrategy.BOTTOM_UP)
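```

Here `Net(..)` stands for any ordinary `torch.nn.Module`. A hypothetical minimal network (not part of the package) that would make the snippet above concrete could be:

```python
import torch.nn as nn

class Net(nn.Module):
    """A small MLP, e.g. for flattened 28x28 MNIST digits (illustrative only)."""
    def __init__(self):
        super().__init__()
        self.layers = nn.Sequential(nn.Linear(784, 128), nn.ReLU(), nn.Linear(128, 10))

    def forward(self, x):
        return self.layers(x)
```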

2. Provide an objective function (a callable) that takes the wrapped model and samples as its inputs and returns a scalar value, the measurement of the objective. It can also return a list of scalar values, in which case each entry corresponds to a separate objective. Note that Samples is just a simple dataclass wrapping the data needed to compute the objective function: in supervised learning it contains inputs and targets; in reinforcement learning it could hold environments, seeds, etc.

```python
from typing import List

from pytorch_optimize.model import Model
from pytorch_optimize.objective import Objective, Samples

class MyObjective(Objective):
    def __call__(self, model: Model, samples: Samples) -> List[float]:
        # compute your objective function(s)
        return objectives

my_objective = MyObjective()
```
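
For instance, a supervised-learning objective that directly measures classification accuracy could look roughly like this. It is a sketch only: the `inputs`/`targets` fields on `Samples` and access to the wrapped network via `model.net` are assumptions, not the package's documented API.

```python
import torch
from typing import List

from pytorch_optimize.model import Model
from pytorch_optimize.objective import Objective, Samples

class AccuracyObjective(Objective):
    def __call__(self, model: Model, samples: Samples) -> List[float]:
        # assumed field names on Samples and attribute `net` on Model (illustrative only)
        with torch.no_grad():
            logits = model.net(samples.inputs)
            accuracy = (logits.argmax(dim=1) == samples.targets).float().mean().item()
        return [accuracy]
```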

3. Create an instance of the ESOptimizer. It takes the wrapped model, an SGD optimizer and the objective function. Additionally, you have to pass a weight for each objective function via obj_weights, along with the ES parameters sigma and n_samples. Internally, the objectives are subjected to a rank transformation so that the scales of the objective function(s) do not influence the optimization.

Note: The optimizer performs gradient ascent instead of descent. The objective functions therefore need to be implemented accordingly (for instance, returning 1/loss instead of loss).

```python
sgd_optimizer = torch.optim.SGD(net.parameters(), lr=1e-2)
es_optimizer = ESOptimizer(model=model, sgd_optimizer=sgd_optimizer,
                           objective_fn=my_objective, obj_weights=[1.0],
                           sigma=1e-1, n_samples=100)
```
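
If the objective function returns more than one value, `obj_weights` sets the relative importance of each. For example (illustrative weights only, reusing the parameters from the snippet above):

```python
es_optimizer = ESOptimizer(model=model, sgd_optimizer=sgd_optimizer,
                           objective_fn=my_objective,
                           obj_weights=[0.7, 0.3],   # two objectives weighted 70/30 (example values)
                           sigma=1e-1, n_samples=100)
```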

4. Write your usual training loop or trainer routine following the template below.

```python
for epoch in range(1000):
    samples = Samples(..)                                # wrap data
    es_optimizer.gradient_step(samples)                  # gradient step
    objective_at_epoch = MyObjective()(model, samples)   # measure objective after stepping
```
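
Putting the pieces together for the MNIST example, one epoch of such a loop might look roughly as follows. This is a sketch under the same assumptions as above: the `Samples` field names are hypothetical, and batching details are up to you.

```python
import torch
from torchvision import datasets, transforms

loader = torch.utils.data.DataLoader(
    datasets.MNIST("data", train=True, download=True, transform=transforms.ToTensor()),
    batch_size=64, shuffle=True)

for inputs, targets in loader:
    inputs = inputs.view(inputs.size(0), -1)             # flatten 28x28 images for an MLP
    samples = Samples(inputs=inputs, targets=targets)    # hypothetical field names
    es_optimizer.gradient_step(samples)                  # one ES gradient-ascent step
```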

Demo scripts

Two simple showcases, one for supervised learning and one for reinforcement learning, are provided in the sample_scripts folder:

1. Supervised Learning: As an illustrative example, supervised.py shows how to train a classifier on MNIST digits by directly optimizing accuracy rather than the cross-entropy loss.

2. Reinforcement Learning: Similarly, rl.py shows how to train an RL agent that maximizes the episodic reward it receives while solving the cart-pole balancing task (see the sketch below). To run this script, also install gym.
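
Such an episodic-reward objective might be sketched as follows. It is illustrative only: the `env`/`seed` fields on `Samples`, the `model.net` attribute, and the classic pre-0.26 gym step API are all assumptions.

```python
import torch
from typing import List

from pytorch_optimize.model import Model
from pytorch_optimize.objective import Objective, Samples

class EpisodicReward(Objective):
    def __call__(self, model: Model, samples: Samples) -> List[float]:
        env = samples.env                      # e.g. a gym.make("CartPole-v1") instance (assumed field)
        env.seed(samples.seed)                 # assumed field; classic gym seeding
        obs, done, total_reward = env.reset(), False, 0.0
        while not done:
            with torch.no_grad():
                logits = model.net(torch.as_tensor(obs, dtype=torch.float32))
            obs, reward, done, _ = env.step(int(logits.argmax()))   # classic 4-tuple step API
            total_reward += reward
        return [total_reward]
```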

Contributions

You are welcome to contribute to the repository by developing new features or fixing bugs. If you do so, please create a pull request.

Cite

If you use this repository for your research, please cite it using the following BibTeX entry:

```
@software{Ramamurthy_pytorch-optimize_is_a,
  author = {Ramamurthy, Rajkumar},
  license = {MIT},
  title = {{pytorch-optimize is a simple black-box optimisation framework}},
  url = {https://github.com/rajcscw/pytorch-optimize},
  version = {0.0.1}
}
```

Owner

  • Name: Rajkumar Ramamurthy
  • Login: rajcscw
  • Kind: user
  • Location: Germany
  • Company: @fraunhofer-iais

Data Scientist/PhD candidate @ Fraunhofer IAIS

Citation (CITATION.cff)

cff-version: 1.2.0
title: 'pytorch-optimize is a simple black-box optimisation framework'
message: >-
  If you use this software, please cite it using the
  metadata from this file.
type: software
authors:
  - given-names: Rajkumar
    family-names: Ramamurthy
repository-code: 'https://github.com/rajcscw/pytorch-optimize'
abstract: "pytorch-optimize is a simple black-box framework to train pytorch models for optimizing arbitrary objective functions. It provides simple wrappers for models and optimizers so that they can be used to optimize the provided objective function (including non-differentiable objectives). It also supports optimization of multiple objectives out-of-the-box."
keywords:
  - pytorch
license: MIT
version: 0.0.1

GitHub Events

Total
  • Watch event: 1
Last Year
  • Watch event: 1

Committers

Last synced: over 1 year ago

All Time
  • Total Commits: 39
  • Total Committers: 1
  • Avg Commits per committer: 39.0
  • Development Distribution Score (DDS): 0.0
Past Year
  • Commits: 0
  • Committers: 0
  • Avg Commits per committer: 0.0
  • Development Distribution Score (DDS): 0.0
Top Committers
Name Email Commits
Rajkumar R r****4@g****m 39

Issues and Pull Requests

Last synced: over 1 year ago

All Time
  • Total issues: 0
  • Total pull requests: 2
  • Average time to close issues: N/A
  • Average time to close pull requests: 33 minutes
  • Total issue authors: 0
  • Total pull request authors: 1
  • Average comments per issue: 0
  • Average comments per pull request: 0.0
  • Merged pull requests: 2
  • Bot issues: 0
  • Bot pull requests: 0
Past Year
  • Issues: 0
  • Pull requests: 0
  • Average time to close issues: N/A
  • Average time to close pull requests: N/A
  • Issue authors: 0
  • Pull request authors: 0
  • Average comments per issue: 0
  • Average comments per pull request: 0
  • Merged pull requests: 0
  • Bot issues: 0
  • Bot pull requests: 0
Top Authors
Issue Authors
Pull Request Authors
  • rajcscw (2)
Top Labels
Issue Labels
Pull Request Labels

Packages

  • Total packages: 1
  • Total downloads:
    • pypi 8 last-month
  • Total dependent packages: 0
  • Total dependent repositories: 1
  • Total versions: 1
  • Total maintainers: 1
pypi.org: pytorch-optimize

Package to train pytorch models for non-differentiable objectives

  • Versions: 1
  • Dependent Packages: 0
  • Dependent Repositories: 1
  • Downloads: 8 Last month
Rankings
Dependent packages count: 7.3%
Forks count: 16.9%
Stargazers count: 17.7%
Dependent repos count: 22.1%
Average: 26.7%
Downloads: 69.4%
Maintainers (1)
Last synced: 6 months ago

Dependencies

setup.py pypi
  • numpy *