dpl
[NeurIPS 2023] Multi-fidelity hyperparameter optimization with deep power laws that achieves state-of-the-art results across diverse benchmarks.
Science Score: 26.0%
This score indicates how likely this project is to be science-related based on various indicators:
- ○ CITATION.cff file
- ✓ codemeta.json file (found)
- ✓ .zenodo.json file (found)
- ○ DOI references
- ○ Academic publication links
- ○ Committers with academic emails
- ○ Institutional organization owner
- ○ JOSS paper metadata
- ○ Scientific vocabulary similarity: low similarity (14.5%) to scientific vocabulary
Repository
Basic Info
Statistics
- Stars: 0
- Watchers: 1
- Forks: 0
- Open Issues: 0
- Releases: 0
Metadata Files
README.md
Scaling Laws for Hyperparameter Optimization
Hyperparameter optimization is an important subfield of machine learning that focuses on tuning the hyperparameters of a chosen algorithm to achieve peak performance. Recently, there has been a stream of methods that tackle hyperparameter optimization; however, most of them do not exploit the scaling-law property of learning curves. In this work, we propose Deep Power Law (DPL), a neural network model conditioned to yield predictions that follow a power-law scaling pattern. Our model dynamically decides which configurations to pause and which to train incrementally by making use of multi-fidelity estimation. We compare our method against 7 state-of-the-art competitors on 3 benchmarks related to tabular, image, and NLP datasets, covering 57 diverse search spaces. Our method achieves the best results across all benchmarks, obtaining the best any-time performance among all competitors.
Authors: Arlind Kadra, Maciej Janowski, Martin Wistuba, Josif Grabocka
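To make the power-law idea concrete, here is a minimal sketch of a network whose outputs are constrained to follow a power law in the training budget. This is an illustration only, not the repository's implementation: the parameterization y_hat(x) = alpha + beta * x^(-gamma), the layer sizes, and the synthetic fitting loop are all assumptions.
```
import torch
import torch.nn as nn

class PowerLawSurrogate(nn.Module):
    """Sketch: map a configuration to the parameters (alpha, beta, gamma)
    of a power law in the budget x, so y_hat(x) = alpha + beta * x^(-gamma)."""

    def __init__(self, config_dim, hidden_dim=64):
        super().__init__()
        self.body = nn.Sequential(
            nn.Linear(config_dim, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, 3),  # -> alpha, beta, gamma
        )

    def forward(self, config, budget):
        alpha, beta, gamma = self.body(config).unbind(dim=-1)
        gamma = nn.functional.softplus(gamma)  # keep the exponent positive
        return alpha + beta * budget.pow(-gamma)

# Fit on short synthetic curves, then extrapolate to a larger budget
model = PowerLawSurrogate(config_dim=4)
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
configs = torch.rand(8, 4)                # 8 dummy configurations
budgets = torch.arange(1, 6).float()      # observed epochs 1..5
targets = 0.9 - 0.5 * budgets.pow(-0.7)   # a synthetic power-law curve
for _ in range(200):
    pred = model(configs.repeat_interleave(5, dim=0), budgets.repeat(8))
    loss = nn.functional.mse_loss(pred, targets.repeat(8))
    opt.zero_grad(); loss.backward(); opt.step()
print(model(configs[:1], torch.tensor([50.0])))  # extrapolated performance
```
Because such an output head can only produce power-law-shaped curves, a few observed epochs suffice to extrapolate a configuration's performance at much larger budgets, which is what makes pause-and-resume decisions cheap.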
Setting up the virtual environment
```
# The following commands assume the user is in the cloned directory
conda create -n dpl python=3.8
conda activate dpl
cat requirements.txt | xargs -n 1 -L 1 pip install
```
Add the LCBench code & data
Copy the contents of https://github.com/automl/LCBench into a folder lc_bench in the root of the DPL repository.
From https://figshare.com/projects/LCBench/74151, download data_2k.zip and extract the JSON file to DPL/lc_bench/results/data_2k.json.
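As a quick sanity check that the data landed in the right place, a snippet along these lines can load the file and list a few datasets (the top-level layout, a JSON object keyed by dataset name, is an assumption based on LCBench's published format):
```
import json

# Run from the repository root
with open("lc_bench/results/data_2k.json") as f:
    data = json.load(f)

print(len(data), "datasets available")
print(sorted(data)[:5])  # "credit-g" should be among the keys
```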
Running the Deep Power Laws (DPL) code
The entry script for running an experiment is main_experiment.py; the module starts a full HPO search.
The main arguments for main_experiment.py are:
- `--index`: The worker index. Every worker runs the same experiment, but with a different seed.
- `--fantasize_step`: The step used when fantasizing the next learning-curve value from the last observed one for a given hyperparameter configuration.
- `--budget_limit`: The maximal number of HPO iterations.
- `--ensemble_size`: The ensemble size for the DPL surrogate.
- `--nr_epochs`: The number of epochs used to train (not refine) the HPO surrogate.
- `--dataset_name`: The name of the dataset used in the experiment. Dataset names must match the benchmark they belong to.
- `--benchmark_name`: The name of the benchmark used in the experiment. Every benchmark offers its own distinctive datasets; available options are `lcbench`, `taskset` and `pd1`.
- `--surrogate_name`: The method that will be run.
- `--project_dir`: The directory where the project files are located.
- `--output_dir`: The directory where the project output files will be stored.
A minimal example of running DPL:
```
python main_experiment.py --index 1 --fantasize_step 1 --budget_limit 1000 --ensemble_size 5 --nr_epochs 250 --dataset_name "credit-g" --benchmark_name "lcbench" --surrogate_name "power_law" --project_dir "." --output_dir "."
```
The example above will run the first repetition (pertaining to the first seed) with an HPO budget of 1000 trials, using the dataset credit-g from the lcbench benchmark. The experiment will run the power-law surrogate with an ensemble of 5 members, and every hyperparameter configuration selected by the acquisition function will be trained for 1 more step. At the beginning, and every time the training procedure is restarted, the models will be trained for 250 epochs. The script will treat the current folder as the project folder and will save the output files there as well.
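For intuition only, the following is a heavily simplified sketch of the kind of loop the script runs: fit a power law to each partial learning curve, fantasize one step ahead, and advance the most promising configuration. It substitutes scipy's curve_fit for the neural ensemble and a greedy rule for the real acquisition function, so none of these names come from the repository.
```
import numpy as np
from scipy.optimize import curve_fit

def power_law(x, a, b, c):
    # Saturating shape assumed by a DPL-style surrogate: y(x) = a - b * x^(-c)
    return a - b * np.power(x, -c)

rng = np.random.default_rng(0)
# One observed epoch (accuracy) per candidate configuration to start with
curves = {i: [rng.uniform(0.4, 0.6)] for i in range(10)}

for _ in range(50):  # HPO budget, in single-epoch trials
    scores = {}
    for cfg, ys in curves.items():
        if len(ys) >= 3:
            x = np.arange(1, len(ys) + 1, dtype=float)
            try:
                params, _ = curve_fit(power_law, x, ys, maxfev=2000)
                # "Fantasize" the value one step ahead (fantasize_step = 1)
                scores[cfg] = power_law(len(ys) + 1, *params)
            except RuntimeError:
                scores[cfg] = ys[-1]
        else:
            scores[cfg] = ys[-1] + 0.05  # stay optimistic about short curves
    best = max(scores, key=scores.get)   # greedy stand-in for the acquisition
    # Advance the chosen configuration by one fidelity step (simulated here)
    curves[best].append(min(0.95, curves[best][-1] + rng.uniform(0.0, 0.05)))

print("most-trained config:", max(curves, key=lambda c: len(curves[c])))
```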
Plots
The plots that are included in our paper were generated from the functions in the module plots/normalized_regret.py.
The plots expect the following result folder structure:
```
results_folder
    benchmark_name
        method_name
            dataset_name_repetition_id.json
```
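A sketch of how such a results tree can be traversed before plotting (the file-name parsing assumes the dataset_name_repetition_id.json convention above; the helper names are not from the plots module):
```
import json
from collections import defaultdict
from pathlib import Path

results_folder = Path(".")  # the --output_dir used above

# (benchmark, method, dataset) -> list of per-repetition result payloads
runs = defaultdict(list)
for path in results_folder.glob("*/*/*.json"):
    benchmark, method = path.parts[-3], path.parts[-2]
    dataset, _, repetition_id = path.stem.rpartition("_")
    with open(path) as f:
        runs[(benchmark, method, dataset)].append(json.load(f))

for key, reps in sorted(runs.items()):
    print(*key, "-", len(reps), "repetition(s)")
```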
Citation
```
@inproceedings{
  kadra2023scaling,
  title={Scaling Laws for Hyperparameter Optimization},
  author={Arlind Kadra and Maciej Janowski and Martin Wistuba and Josif Grabocka},
  booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
  year={2023},
  url={https://openreview.net/forum?id=ghzEUGfRMD}
}
```
Owner
- Name: Arlind Kadra
- Login: ArlindKadra
- Kind: user
- Location: Freiburg im Breisgau, Germany
- Company: Albert-Ludwigs-Universität Freiburg
- Website: https://relea.informatik.uni-freiburg.de/people/arlind-kadra
- Repositories: 2
- Profile: https://github.com/ArlindKadra
GitHub Events
Total
- Push event: 1
- Create event: 2
Last Year
- Push event: 1
- Create event: 2
Committers
Last synced: 11 months ago
Top Committers
| Name | Email | Commits |
|---|---|---|
| Arlind Kadra | a****a@g****m | 2 |
Issues and Pull Requests
Last synced: 11 months ago
All Time
- Total issues: 0
- Total pull requests: 0
- Average time to close issues: N/A
- Average time to close pull requests: N/A
- Total issue authors: 0
- Total pull request authors: 0
- Average comments per issue: 0
- Average comments per pull request: 0
- Merged pull requests: 0
- Bot issues: 0
- Bot pull requests: 0
Past Year
- Issues: 0
- Pull requests: 0
- Average time to close issues: N/A
- Average time to close pull requests: N/A
- Issue authors: 0
- Pull request authors: 0
- Average comments per issue: 0
- Average comments per pull request: 0
- Merged pull requests: 0
- Bot issues: 0
- Bot pull requests: 0
Dependencies
- Babel ==2.11.0
- ConfigSpace ==0.6.0
- Cython ==0.29.32
- HeapDict ==1.0.1
- Jinja2 ==3.1.2
- Keras-Preprocessing ==1.1.2
- Mako ==1.2.2
- Markdown ==3.3.6
- MarkupSafe ==2.1.1
- Pillow ==8.4.0
- PyYAML ==6.0
- Pygments ==2.13.0
- SQLAlchemy ==1.4.41
- Send2Trash ==1.8.0
- Sphinx ==5.3.0
- Werkzeug ==2.1.0
- absl-py ==1.0.0
- aiosignal ==1.2.0
- alabaster ==0.7.12
- alembic ==1.8.1
- argon2-cffi ==21.3.0
- argon2-cffi-bindings ==21.2.0
- asttokens ==2.0.8
- astunparse ==1.6.3
- atomicwrites ==1.4.1
- attrs ==21.4.0
- autograd ==1.4
- autopage ==0.5.1
- backcall ==0.2.0
- backports.functools-lru-cache ==1.6.4
- beautifulsoup4 ==4.11.1
- black ==22.3.0
- bleach ==5.0.1
- boto3 ==1.26.33
- botocore ==1.29.33
- botorch ==0.6.6
- cachetools ==5.0.0
- certifi ==2022.6.15
- cffi ==1.15.1
- charset-normalizer ==2.0.12
- click ==8.0.4
- cliff ==4.0.0
- cloudpickle ==2.2.0
- cmaes ==0.8.2
- cmd2 ==2.4.2
- colorama ==0.4.5
- coloredlogs ==15.0.1
- colorlog ==6.7.0
- contextlib2 ==21.6.0
- cramjam ==2.5.0
- cycler ==0.11.0
- dask ==2022.9.1
- debugpy ==1.6.3
- decorator ==5.1.1
- defusedxml ==0.7.1
- dill ==0.3.5.1
- distlib ==0.3.6
- distributed ==2022.9.1
- docutils ==0.17.1
- dragonfly-opt ==0.1.6
- entrypoints ==0.4
- etils ==0.6.0
- executing ==0.10.0
- fastjsonschema ==2.16.1
- fastparquet ==0.8.3
- filelock ==3.8.0
- flake8 ==5.0.4
- flatbuffers ==1.12
- fonttools ==4.29.1
- frozenlist ==1.3.1
- fsspec ==2022.8.2
- future ==0.18.2
- gast ==0.4.0
- google-auth ==2.6.2
- google-auth-oauthlib ==0.4.6
- google-pasta ==0.2.0
- googleapis-common-protos ==1.56.2
- gpytorch ==1.8.1
- greenlet ==1.1.3
- grpcio ==1.43.0
- h5py ==3.6.0
- humanfriendly ==10.0
- idna ==3.3
- imagesize ==1.4.1
- importlib-metadata ==4.11.4
- importlib-resources ==5.9.0
- iniconfig ==1.1.1
- ipykernel ==6.15.1
- ipython ==8.4.0
- ipython-genutils ==0.2.0
- ipywidgets ==8.0.1
- jedi ==0.18.1
- jmespath ==1.0.1
- joblib ==1.1.0
- jsonschema ==4.13.0
- jupyter-client ==7.3.4
- jupyter_core ==4.11.1
- jupyterlab-pygments ==0.2.2
- jupyterlab-widgets ==3.0.2
- keras ==2.9.0
- kiwisolver ==1.3.2
- latexcodec ==2.0.1
- lcdb ==0.1.0
- liac-arff ==2.5.0
- libclang ==13.0.0
- locket ==1.0.0
- loguru ==0.6.0
- lxml ==4.9.1
- markdown-it-py ==2.1.0
- matplotlib ==3.5.1
- matplotlib-inline ==0.1.6
- mccabe ==0.7.0
- mdit-py-plugins ==0.3.3
- mdurl ==0.1.2
- minio ==7.1.5
- mistune ==0.8.4
- mkl-fft ==1.3.1
- mkl-random ==1.2.2
- mkl-service ==2.4.0
- mpmath ==1.2.1
- msgpack ==1.0.4
- multipledispatch ==0.6.0
- multiprocess ==0.70.13
- mypy-extensions ==0.4.3
- myst-parser ==0.18.1
- nas-bench-201 ==2.1
- nbclient ==0.6.6
- nbconvert ==6.5.3
- nbformat ==5.4.0
- nest-asyncio ==1.5.5
- notebook ==6.4.12
- numpy ==1.21.5
- oauthlib ==3.2.0
- olefile ==0.46
- onnxruntime ==1.12.1
- openml ==0.12.2
- opt-einsum ==3.3.0
- optuna ==3.0.2
- packaging ==21.3
- pandas ==1.4.1
- pandocfilters ==1.5.0
- parso ==0.8.3
- partd ==1.3.0
- pathos ==0.2.9
- pathspec ==0.10.0
- patsy ==0.5.2
- pbr ==5.10.0
- pickleshare ==0.7.5
- pip ==22.2.2
- pkgutil_resolve_name ==1.3.10
- platformdirs ==2.5.2
- pluggy ==1.0.0
- pox ==0.3.1
- ppft ==1.7.6.5
- prettytable ==3.4.1
- prometheus-client ==0.14.1
- promise ==2.3
- prompt-toolkit ==3.0.30
- protobuf ==3.19.4
- protobuf3-to-dict ==0.1.5
- psutil ==5.9.1
- pure-eval ==0.2.2
- py ==1.11.0
- pyaml ==21.10.1
- pyarrow ==7.0.0
- pyasn1 ==0.4.8
- pyasn1-modules ==0.2.8
- pybtex ==0.24.0
- pybtex-docutils ==1.0.2
- pycodestyle ==2.9.1
- pycparser ==2.21
- pyflakes ==2.5.0
- pyparsing ==3.0.9
- pyperclip ==1.8.2
- pyreadline3 ==3.4.1
- pyro-api ==0.1.2
- pyro-ppl ==1.8.0
- pyrsistent ==0.18.1
- pytest ==7.1.2
- pytest-timeout ==2.1.0
- python-dateutil ==2.8.2
- pytz ==2021.3
- pywin32 ==304
- pywinpty ==2.0.7
- pyzmq ==23.2.1
- ray ==2.0.0
- requests ==2.27.1
- requests-oauthlib ==1.3.1
- rsa ==4.8
- s3fs ==0.4.2
- s3transfer ==0.6.0
- sagemaker ==2.125.0
- schema ==0.7.5
- scikit-learn ==1.0.2
- scikit-optimize ==0.9.0
- scipy ==1.8.0
- seaborn ==0.11.2
- setuptools ==58.0.4
- six ==1.16.0
- sklearn ==0.0
- smdebug-rulesconfig ==1.0.1
- snowballstemmer ==2.2.0
- sortedcontainers ==2.4.0
- soupsieve ==2.3.2.post1
- sphinx-copybutton ==0.5.1
- sphinx-rtd-theme ==1.1.1
- sphinx_autodoc_typehints ==1.19.5
- sphinxcontrib-applehelp ==1.0.2
- sphinxcontrib-bibtex ==2.5.0
- sphinxcontrib-devhelp ==1.0.2
- sphinxcontrib-htmlhelp ==2.0.0
- sphinxcontrib-jsmath ==1.0.1
- sphinxcontrib-qthelp ==1.0.3
- sphinxcontrib-serializinghtml ==1.1.5
- stack-data ==0.4.0
- statsmodels ==0.13.2
- stevedore ==4.0.0
- sympy ==1.11.1
- syne-tune ==0.3.3
- tabulate ==0.8.10
- tblib ==1.7.0
- tensorboard ==2.9.1
- tensorboard-data-server ==0.6.1
- tensorboard-plugin-wit ==1.8.1
- tensorboardX ==2.5.1
- tensorflow ==2.9.1
- tensorflow-datasets ==4.6.0
- tensorflow-estimator ==2.9.0
- tensorflow-io ==0.26.0
- tensorflow-io-gcs-filesystem ==0.26.0
- tensorflow-metadata ==1.8.0
- termcolor ==1.1.0
- terminado ==0.15.0
- tf-estimator-nightly ==2.8.0.dev2021122109
- threadpoolctl ==3.1.0
- tinycss2 ==1.1.1
- toml ==0.10.2
- tomli ==2.0.1
- toolz ==0.12.0
- torch ==1.10.2
- torchaudio ==0.10.2
- torchvision ==0.11.3
- tornado ==6.1
- tqdm ==4.64.0
- traitlets ==5.3.0
- typing-extensions ==3.10.0.2
- ujson ==5.4.0
- urllib3 ==1.26.9
- virtualenv ==20.16.4
- wcwidth ==0.2.5
- webencodings ==0.5.1
- wheel ==0.37.1
- widgetsnbextension ==4.0.2
- win32-setctime ==1.1.0
- wincertstore ==0.2
- wrapt ==1.14.0
- xgboost ==1.6.2
- xmltodict ==0.12.0
- yahpo-gym ==1.0.1
- zict ==2.2.0
- zipp ==3.8.1
- ConfigSpace >=0.4.16
- dask >=2.27.0
- distributed >=2.27.0
- loguru >=0.5.3
- numpy >=1.18.2