skwdro

Distributionally robust machine learning with Pytorch and Scikit-learn wrappers

https://github.com/iutzeler/skwdro

Science Score: 54.0%

This score indicates how likely this project is to be science-related based on various indicators:

  • CITATION.cff file
    Found CITATION.cff file
  • codemeta.json file
    Found codemeta.json file
  • .zenodo.json file
    Found .zenodo.json file
  • DOI references
  • Academic publication links
    Links to: arxiv.org
  • Academic email domains
  • Institutional organization owner
  • JOSS paper metadata
  • Scientific vocabulary similarity
    Low similarity (9.9%) to scientific vocabulary

Keywords

distributionally-robust-optimization machine-learning statistical-learning
Last synced: 6 months ago

Repository

Distributionally robust machine learning with Pytorch and Scikit-learn wrappers

Basic Info
Statistics
  • Stars: 15
  • Watchers: 4
  • Forks: 0
  • Open Issues: 7
  • Releases: 5
Topics
distributionally-robust-optimization machine-learning statistical-learning
Created almost 3 years ago · Last pushed 6 months ago
Metadata Files
Readme · License · Code of conduct · Citation

README.md

[Badges: CI test workflow, style workflow, documentation (Read the Docs), code style and type checks, build, PyPI install, Python version, conda, GitHub, citation]

SkWDRO - Wasserstein Distributionally Robust Optimization

Model robustification with a thin interface

You can make pigs fly, [Kolter & Madry, 2018]


skwdro is a Python package that offers WDRO (Wasserstein distributionally robust optimization) versions of a large range of estimators, either by extending scikit-learn estimators or by providing a wrapper for PyTorch modules.

Have a look at the skwdro documentation!

Getting started with skwdro

Installation

Development mode with hatch

First install hatch and clone the repository. In the root folder, `make shell` gives you an interactive shell in the correct environment and `make test` runs the tests (it can be launched from both an interactive shell and a normal shell). `make reset_env` removes the installed environments (useful in case of trouble). A typical session is sketched below.
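A minimal sketch of that workflow, using the repository URL listed above and the make targets described in the paragraph:

```shell
# Install hatch, then clone the repository and move to its root folder
pip install hatch
git clone https://github.com/iutzeler/skwdro.git
cd skwdro

# Interactive shell in the correct environment
make shell

# Run the tests (works from an interactive or a normal shell)
make test

# Remove the installed environments in case of trouble
make reset_env
```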

With pip

skwdro will be available on PyPI soon; for now, only the development mode is available.

First steps with skwdro

scikit-learn interface

Robust estimators from skwdro can be used as drop-in replacements for scikit-learn estimators (they actually inherit from scikit-learn's estimator and classifier classes). skwdro provides robust estimators for standard problems such as linear regression and logistic regression. LinearRegression from skwdro.linear_model is a robust version of LinearRegression from scikit-learn and can be used in the same way. The only difference is that an uncertainty radius rho is now required.

We assume that we are given X_train of shape (n_train, n_features) and y_train of shape (n_train,) as training data and X_test of shape (n_test, n_features) as test data.

```python
from skwdro.linear_model import LinearRegression

# Uncertainty radius
rho = 0.1

# Fit the model
robust_model = LinearRegression(rho=rho)
robust_model.fit(X_train, y_train)

# Predict the target values
y_pred = robust_model.predict(X_test)
```

You can refer to the documentation to explore the list of `skwdro`'s already-made estimators.
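Classification follows the same pattern. The sketch below assumes that a logistic counterpart lives alongside LinearRegression in skwdro.linear_model and takes the same rho parameter; check the documentation for the exact name and signature.

```python
from skwdro.linear_model import LogisticRegression  # assumed robust counterpart of scikit-learn's LogisticRegression

# Uncertainty radius, as before
rho = 0.1

# Fit the robust classifier exactly like the scikit-learn estimator
robust_clf = LogisticRegression(rho=rho)
robust_clf.fit(X_train, y_train)

# Predict labels with the usual scikit-learn API
y_pred = robust_clf.predict(X_test)
```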

pytorch interface

Didn't find an estimator that suits you? You can compose your own using the PyTorch interface: it allows more flexibility, with custom models and optimizers.

Assume now that the data is given as a dataloader train_loader.
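If the data are still the NumPy arrays X_train and y_train from the previous section, such a dataloader can be built with standard PyTorch utilities; this is only a sketch, and the batch size is an arbitrary choice:

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

# Wrap the training arrays into a dataset, then a dataloader
train_dataset = TensorDataset(
    torch.as_tensor(X_train, dtype=torch.float32),
    torch.as_tensor(y_train, dtype=torch.float32).unsqueeze(-1),  # match the (n, 1) model output
)
train_loader = DataLoader(train_dataset, batch_size=32, shuffle=True)
```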

```python
import torch
import torch.nn as nn
import torch.optim as optim

from skwdro.torch import robustify

# Uncertainty radius
rho = 0.1

# Define the model
model = nn.Linear(n_features, 1)

# Define the loss function
loss_fn = nn.MSELoss()

# Define a sample batch for initialization
sample_batch_x, sample_batch_y = next(iter(train_loader))

# Robust loss
robust_loss = robustify(loss_fn, model, rho, sample_batch_x, sample_batch_y)

# Define the optimizer
optimizer = optim.Adam(model.parameters(), lr=0.01)

# Training loop
for epoch in range(100):
    for batch_x, batch_y in train_loader:
        optimizer.zero_grad()
        loss = robust_loss(model(batch_x), batch_y)
        loss.backward()
        optimizer.step()
```
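After training, the robustified model can be evaluated like any PyTorch module; a minimal sketch, reusing the test array X_test from the scikit-learn section:

```python
# Switch to evaluation mode and predict on the held-out data
model.eval()
with torch.no_grad():
    y_pred = model(torch.as_tensor(X_test, dtype=torch.float32))
```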

You will find a detailed description of how to robustify modules in the documentation.

Cite

skwdro is the result of a research project. It is licensed under the BSD 3-Clause license. You are free to use it, and if you do so, please cite:

```bibtex
@article{vincent2024skwdro,
  title={skwdro: a library for Wasserstein distributionally robust machine learning},
  author={Vincent, Florian and Azizian, Wa{\"\i}ss and Iutzeler, Franck and Malick, J{\'e}r{\^o}me},
  journal={arXiv preprint arXiv:2410.21231},
  year={2024}
}
```

Citation (CITATION.bib)

@article{vincent2024skwdro,
  title={skwdro: a library for Wasserstein distributionally robust machine learning},
  author={Vincent, Florian and Azizian, Wa{\"\i}ss and Iutzeler, Franck and Malick, J{\'e}r{\^o}me},
  journal={arXiv preprint arXiv:2410.21231},
  year={2024}
}

GitHub Events

Total
  • Create event: 19
  • Issues event: 4
  • Release event: 2
  • Watch event: 8
  • Delete event: 14
  • Issue comment event: 12
  • Push event: 37
  • Pull request review event: 15
  • Pull request event: 46
Last Year
  • Create event: 19
  • Issues event: 4
  • Release event: 2
  • Watch event: 8
  • Delete event: 14
  • Issue comment event: 12
  • Push event: 37
  • Pull request review event: 15
  • Pull request event: 46

Issues and Pull Requests

Last synced: 6 months ago

All Time
  • Total issues: 1
  • Total pull requests: 5
  • Average time to close issues: N/A
  • Average time to close pull requests: about 7 hours
  • Total issue authors: 1
  • Total pull request authors: 2
  • Average comments per issue: 0.0
  • Average comments per pull request: 0.0
  • Merged pull requests: 4
  • Bot issues: 0
  • Bot pull requests: 0
Past Year
  • Issues: 1
  • Pull requests: 5
  • Average time to close issues: N/A
  • Average time to close pull requests: about 7 hours
  • Issue authors: 1
  • Pull request authors: 2
  • Average comments per issue: 0.0
  • Average comments per pull request: 0.0
  • Merged pull requests: 4
  • Bot issues: 0
  • Bot pull requests: 0
Top Authors
Issue Authors
  • floffy-f (6)
  • iutzeler (2)
  • wazizian (2)
Pull Request Authors
  • floffy-f (22)
  • iutzeler (21)
  • wazizian (7)
Top Labels
Issue Labels
enhancement (1) help wanted (1)
Pull Request Labels
bug (7) enhancement (7) Test (6) release (4) wontfix (2) DO NOT MERGE (2) documentation (1) help wanted (1) low priority (1)

Packages

  • Total packages: 1
  • Total downloads:
    • pypi 34 last-month
  • Total dependent packages: 0
  • Total dependent repositories: 0
  • Total versions: 5
  • Total maintainers: 1
pypi.org: skwdro

A Robust ML toolbox

  • Versions: 5
  • Dependent Packages: 0
  • Dependent Repositories: 0
  • Downloads: 34 Last month
Rankings
Dependent packages count: 10.7%
Downloads: 12.8%
Average: 27.8%
Dependent repos count: 60.0%
Maintainers (1)
Last synced: 6 months ago

Dependencies

.github/workflows/test.yml actions
  • actions/checkout v3 composite
  • actions/setup-python v4 composite
pyproject.toml pypi
  • cvxopt *
  • cvxpy *
  • dask [distributed]
  • mechanic-pytorch *
  • numpy *
  • pandas *
  • prodigyopt *
  • scikit-learn @ https://pypi.anaconda.org/scientific-python-nightly-wheels/simple/scikit-learn/1.4.dev0/scikit_learn-1.4.dev0-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl
  • scipy *
  • sqwash *
  • torch *
requirements.txt pypi
  • cvxopt *
  • cvxpy *
  • matplotlib *
  • numpy *
  • scikit-learn *
  • scipy *
  • torch *
doc/requirements.txt pypi
  • numpydoc *
  • sphinx-gallery *
  • sphinx_rtd_theme *
environment.yml pypi