syne-tune
Large scale and asynchronous Hyperparameter and Architecture Optimization at your fingertips.
Science Score: 54.0%
This score indicates how likely this project is to be science-related based on various indicators:
- ✓ CITATION.cff file: found
- ✓ codemeta.json file: found
- ✓ .zenodo.json file: found
- ○ DOI references
- ✓ Academic publication links: links to arxiv.org
- ○ Academic email domains
- ○ Institutional organization owner
- ○ JOSS paper metadata
- ○ Scientific vocabulary similarity: low similarity (15.5%) to scientific vocabulary
Keywords
Repository
Large scale and asynchronous Hyperparameter and Architecture Optimization at your fingertips.
Basic Info
- Host: GitHub
- Owner: syne-tune
- License: apache-2.0
- Language: Python
- Default Branch: main
- Homepage: https://syne-tune.readthedocs.io
- Size: 10.9 MB
Statistics
- Stars: 412
- Watchers: 10
- Forks: 56
- Open Issues: 17
- Releases: 17
Topics
Metadata Files
README.md
Syne Tune: Large-Scale and Reproducible Hyperparameter Optimization

Documentation | Blackboxes | Benchmarking | API Reference | PyPI | Latest Blog Post | Discord
Syne Tune is a library for large-scale hyperparameter optimization (HPO) with the following key features:
- State-of-the-art HPO methods for multi-fidelity optimization, multi-objective optimization, transfer learning, and population-based training.
- Tooling that lets you run large-scale experimentation either locally or on SLURM clusters.
- Extensive collection of blackboxes, including surrogate and tabular benchmarks, for efficient HPO simulation.
Installing
To install Syne Tune from pip:
```bash
pip install 'syne-tune[extra]'
```
or to install the latest version from source:
```bash
git clone https://github.com/awslabs/syne-tune.git
cd syne-tune
pip install -e '.[extra]'
```
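To verify the installation, import the package and print its version. This is a minimal sanity check, assuming the package exposes `__version__` as in recent releases:
```python
# Quick sanity check that Syne Tune is importable after installation.
import syne_tune

print(syne_tune.__version__)
```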
See the change log for what changed in the latest version.
Getting started
Syne Tune assumes a Python training script that takes hyperparameters as input arguments, then trains and validates a machine learning model, roughly following this pattern:
```python
from argparse import ArgumentParser

if __name__ == '__main__':
    parser = ArgumentParser()
    parser.add_argument('--epochs', type=int)
    parser.add_argument('--hyperparameter1', type=float)
    parser.add_argument('--hyperparameter2', type=float)
    args, _ = parser.parse_known_args()

    # instantiate your machine learning model
    for epoch in range(args.epochs):
        # training loop: train for some steps or one epoch
        ...
        # validate your model on some hold-out validation data
        ...
```
Step 1: Adapt your training script
First, to enable tuning of your training script, you need to report metrics so they can be communicated to Syne Tune.
For example, assume you want to tune two hyperparameters, height and width, to minimize a loss function.
To report the loss back to Syne Tune after each epoch, simply add report(epoch=epoch, loss=loss) inside your training loop:
```python
# train_height_simple.py
import logging
import time
from argparse import ArgumentParser

from syne_tune import Reporter

if __name__ == '__main__':
    root = logging.getLogger()
    root.setLevel(logging.INFO)

    parser = ArgumentParser()
    parser.add_argument('--epochs', type=int)
    parser.add_argument('--width', type=float)
    parser.add_argument('--height', type=float)
    args, _ = parser.parse_known_args()

    report = Reporter()
    for step in range(args.epochs):
        time.sleep(0.1)
        dummy_score = 1.0 / (0.1 + args.width * step / 100) + args.height * 0.1
        # Feed the score back to Syne Tune.
        report(epoch=step + 1, mean_loss=dummy_score)
```
Step 2: Define a launching script
Once the training script is prepared, we first define the search space and then start the tuning process. In this example, we launch ASHA for a total of 30 seconds using four workers. Each worker spawns a separate Python process to evaluate a hyperparameter configuration, meaning that four configurations are trained in parallel.
```python
# launch_height_simple.py
from syne_tune import Tuner, StoppingCriterion
from syne_tune.backend import LocalBackend
from syne_tune.config_space import randint
from syne_tune.optimizer.baselines import ASHA

# hyperparameter search space to consider
config_space = {
    'width': randint(1, 20),
    'height': randint(1, 20),
    'epochs': 100,
}

tuner = Tuner(
    trial_backend=LocalBackend(entry_point='train_height_simple.py'),
    scheduler=ASHA(
        config_space,
        metric='mean_loss',
        time_attr='epoch',
    ),
    stop_criterion=StoppingCriterion(max_wallclock_time=30),  # total runtime in seconds
    n_workers=4,  # how many trials are evaluated in parallel
)
tuner.run()
```
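While the tuner runs, the LocalBackend evaluates each trial in its own Python process and writes all results to a directory named after the tuning job (by default under `~/syne-tune/`). As a minimal sketch, you can print that name right after `tuner.run()` returns, so it is at hand for the next step (it is also shown at the beginning of the logs):
```python
# Illustrative continuation of launch_height_simple.py, after tuner.run().
# The tuner name identifies this run; by default its results are stored under
# ~/syne-tune/<tuner name>/ and can be loaded later with load_experiment().
print(f"Tuning job name: {tuner.name}")
```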
Step 3: Plot the results
Next, we can plot the results as follows. Replace TUNER_NAME with the name of the tuning job
used earlier — this is shown at the beginning of the logs.
```python
import matplotlib.pyplot as plt

from syne_tune.experiments import load_experiment

# name of the tuning run, which is printed at the beginning of the run
e = load_experiment('TUNER_NAME')
e.plot_trials_over_time(metric_to_plot='mean_loss')
plt.show()
```
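Beyond plotting, the object returned by `load_experiment` gives programmatic access to the results. The snippet below is a small sketch assuming the `ExperimentResult` interface of recent releases (a `best_config()` method and a pandas `results` DataFrame); check the API reference for the version you have installed:
```python
# Illustrative sketch: inspect the loaded experiment programmatically.
# Assumes e = load_experiment('TUNER_NAME') from the snippet above.
best = e.best_config()  # configuration of the trial with the best mean_loss
print(best)

df = e.results          # pandas DataFrame with one row per reported metric value
print(df.head())
```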
Benchmarking
Check out this tutorial to run large-scale benchmarking with Syne Tune.
Blog Posts
- Run distributed hyperparameter and neural architecture tuning jobs with Syne Tune
- Hyperparameter optimization for fine-tuning pre-trained transformer models from Hugging Face (notebook)
- Learn Amazon Simple Storage Service transfer configuration with Syne Tune (code)
Videos
- Martin Wistuba: Hyperparameter Optimization for the Impatient (PyData 2023)
- David Salinas: Syne Tune: A Library for Large-Scale Hyperparameter Tuning and Reproducible Research (AutoML Seminar)
Contributing
See CONTRIBUTING for more information.
Citing Syne Tune
If you use Syne Tune in a scientific publication, please cite the following paper:
"Syne Tune: A Library for Large Scale Hyperparameter Tuning and Reproducible Research" First Conference on Automated Machine Learning, 2022.
```bibtex
@inproceedings{salinas2022syne,
  title={Syne Tune: A Library for Large Scale Hyperparameter Tuning and Reproducible Research},
  author={David Salinas and Matthias Seeger and Aaron Klein and Valerio Perrone and Martin Wistuba and Cedric Archambeau},
  booktitle={International Conference on Automated Machine Learning, AutoML 2022},
  year={2022},
  url={https://proceedings.mlr.press/v188/salinas22a.html}
}
```
License
This project is licensed under the Apache-2.0 License.
Owner
- Name: syne-tune
- Login: syne-tune
- Kind: organization
- Repositories: 1
- Profile: https://github.com/syne-tune
Citation (CITATION.cff)
```yaml
cff-version: 1.2.0
title: "Syne Tune: A Library for Large Scale Hyperparameter Tuning and Reproducible Research"
message: "If you use Syne Tune in your project, please cite our paper."
authors:
  - name: The Syne Tune Team
repository-code: "https://github.com/awslabs/syne-tune"
license: Apache-2.0
preferred-citation:
  authors:
    - given-names: David
      family-names: Salinas
    - family-names: Seeger
      given-names: Matthias
    - given-names: Aaron
      family-names: Klein
    - family-names: Perrone
      given-names: Valerio
    - given-names: Martin
      family-names: Wistuba
    - given-names: Cedric
      family-names: Archambeau
  title: "Syne Tune: A Library for Large Scale Hyperparameter Tuning and Reproducible Research"
  type: conference-paper
  year: 2022
  collection-title: "International Conference on Automated Machine Learning, AutoML 2022"
  url: "https://proceedings.mlr.press/v188/salinas22a.html"
```
GitHub Events
Total
- Fork event: 7
- Create event: 59
- Commit comment event: 1
- Release event: 2
- Issues event: 34
- Watch event: 22
- Delete event: 152
- Member event: 1
- Issue comment event: 36
- Push event: 204
- Pull request review comment event: 112
- Pull request review event: 179
- Pull request event: 130
Last Year
- Fork event: 7
- Create event: 59
- Commit comment event: 1
- Release event: 2
- Issues event: 34
- Watch event: 22
- Delete event: 152
- Member event: 1
- Issue comment event: 36
- Push event: 204
- Pull request review comment event: 112
- Pull request review event: 179
- Pull request event: 130
Issues and Pull Requests
Last synced: 4 months ago
All Time
- Total issues: 19
- Total pull requests: 62
- Average time to close issues: about 1 month
- Average time to close pull requests: 10 days
- Total issue authors: 5
- Total pull request authors: 4
- Average comments per issue: 0.32
- Average comments per pull request: 0.18
- Merged pull requests: 37
- Bot issues: 0
- Bot pull requests: 2
Past Year
- Issues: 19
- Pull requests: 62
- Average time to close issues: about 1 month
- Average time to close pull requests: 10 days
- Issue authors: 5
- Pull request authors: 4
- Average comments per issue: 0.32
- Average comments per pull request: 0.18
- Merged pull requests: 37
- Bot issues: 0
- Bot pull requests: 2
Top Authors
Issue Authors
- aaronkl (11)
- geoalgo (3)
- ralf-koenig (3)
- quchongqi (2)
- ChongqiQu (2)
- Kavlahkaff (2)
- ltiao (1)
- one2clouds (1)
- austinmw (1)
Pull Request Authors
- aaronkl (63)
- dependabot[bot] (20)
- Kavlahkaff (10)
- geoalgo (9)
Top Labels
Issue Labels
Pull Request Labels
Packages
- Total packages: 1
- Total downloads: 7,281 last month (PyPI)
- Total dependent packages: 0
- Total dependent repositories: 3
- Total versions: 21
- Total maintainers: 3
pypi.org: syne-tune
Distributed Hyperparameter Optimization
- Documentation: https://syne-tune.readthedocs.io/
- License: Apache Software License
- Latest release: 0.14.2 (published 6 months ago)
Rankings
Maintainers (3)
Dependencies
- aws-actions/configure-aws-credentials v1 composite
- actions/checkout v3 composite
- actions/setup-python v4 composite
- actions/checkout v3 composite
- actions/setup-python v4 composite
- actions/checkout v2 composite
- actions/setup-python v2 composite
- aws-actions/configure-aws-credentials v1 composite
- actions/checkout v3 composite
- actions/setup-python v4 composite
- botorch *
- coolname *
- fastparquet *
- h5py *
- numpy >=1.16.0,<1.24.0
- pandas *
- s3fs *
- scikit-learn *
- syne-tune *
- tqdm *
- xgboost *
- actions/checkout v4 composite
- actions/setup-python v4 composite
- release-drafter/release-drafter 65c5fb495d1e69aa8c08a3317bc44ff8aabe9772 composite
- actions/checkout v4 composite
- actions/setup-python v4 composite
- actions/upload-artifact a8a3f3ad30e3422c9c7b888a15615d19a852ae32 composite
- actions/checkout v4 composite
- zgosalvez/github-actions-ensure-sha-pinned-actions f32435541e24cd6a4700a7f52bb2ec59e80603b1 composite
- actions/checkout v4 composite
- syne-tune *
- tqdm *
- syne-tune *
- tqdm *
- syne-tune *
- tqdm *
- syne-tune *
- tqdm *
- syne-tune *
- tqdm *
- syne-tune *
- tqdm *
- syne-tune *
- tqdm *
- syne-tune *
- tqdm *
- syne-tune *
- tqdm *
- syne-tune *
- tqdm *
- ${DLAMI_REGISTRY_ID}.dkr.ecr.${REGION}.amazonaws.com/pytorch-training ${VERSION}-${CONTEXT}-ubuntu20.04-sagemaker build
- botorch *
- syne-tune *
- tqdm *
- syne-tune *
- tqdm *
- filelock *
- syne-tune *
- tqdm *
- syne-tune *
- tqdm *
- autograd >=1.3
- boto3 ==1.26.49
- botorch >=0.7.2
- configspace ==0.6.1
- matplotlib ==3.6.3
- mrg32k3a ==1.0.0
- numpy ==1.23.5
- onnxruntime ==1.13.1
- pandas ==1.5.2
- pytest ==7.2.0
- pyyaml ==6.0
- sagemaker ==2.128.0
- scipy >=1.3.3
- simoptlib ==1.0.1
- syne-tune *
- xgboost ==1.7.3
- yahpo-gym ==1.0.1
- autograd >=1.3
- botorch >=0.7.2
- configspace ==0.6.1
- matplotlib ==3.6.3
- mrg32k3a ==1.0.0
- numpy ==1.23.5
- onnxruntime ==1.13.1
- pandas ==1.5.2
- pytest ==7.2.0
- pyyaml ==6.0
- scipy >=1.3.3
- simoptlib ==1.0.1
- syne-tune *
- xgboost ==1.7.3
- yahpo-gym ==1.0.1
- autograd >=1.3
- botorch >=0.7.2
- configspace ==0.6.1
- matplotlib ==3.6.3
- mrg32k3a ==1.0.0
- numpy ==1.23.5
- onnxruntime ==1.13.1
- pandas ==1.5.2
- pyyaml ==6.0
- scipy >=1.3.3
- simoptlib ==1.0.1
- syne-tune *
- xgboost ==1.7.3
- yahpo-gym ==1.0.1
- pandas ==1.5.2
- scikit-learn ==1.1.3
- xgboost ==1.6.2
- syne-tune * test
- tqdm * test
- sortedcontainers * test
- syne-tune * test
- tqdm * test
- datasets ==1.8.0
- transformers *
- datasets ==1.8.0
- transformers *
- filelock *
- filelock *
- torch *
- torchvision *
- filelock *
- tqdm *
- filelock *