lcbench

A learning curve benchmark on OpenML data

https://github.com/automl/lcbench

Science Score: 54.0%

This score indicates how likely this project is to be science-related based on various indicators:

  • CITATION.cff file
    Found CITATION.cff file
  • codemeta.json file
    Found codemeta.json file
  • .zenodo.json file
    Found .zenodo.json file
  • DOI references
  • Academic publication links
  • Committers with academic emails
    2 of 5 committers (40.0%) from academic institutions
  • Institutional organization owner
  • JOSS paper metadata
  • Scientific vocabulary similarity
    Low similarity (14.8%) to scientific vocabulary
Last synced: 7 months ago

Repository

A learning curve benchmark on OpenML data

Basic Info
  • Host: GitHub
  • Owner: automl
  • License: apache-2.0
  • Language: Jupyter Notebook
  • Default Branch: master
  • Size: 273 KB
Statistics
  • Stars: 30
  • Watchers: 6
  • Forks: 9
  • Open Issues: 5
  • Releases: 0
Created about 6 years ago · Last pushed over 1 year ago
Metadata Files
Readme License Citation

README.md

LCBench

A learning curve benchmark on OpenML data.

Dataset overview

LCBench provides extensive training data for different architectures and hyperparameters evaluated on OpenML datasets. The current version provides 2000 configurations, each evaluated on 35 datasets for 50 epochs. For each epoch, the logs include:

  • Training, test and validation losses
  • Training, test and validation accuracy
  • Training, test and validation balanced accuracy
  • Global gradient statistics (max, mean, median, norm, std, q10, q25, q75, q90)
  • Layer-wise gradient statistics (max, mean, median, norm, std, q10, q25, q75, q90)
  • Learning rate
  • Runtime

And additionally:

  • Configuration (architecture, hyperparameters)
  • Number of model parameters
  • Dataset statistics (number of classes, instances and features)
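The logged gradient statistics are plain summary statistics over the gradient entries. As a hedged sketch (not the actual Auto-PyTorch logging code), the global statistics for one epoch could be computed like this:

```py
import statistics

def gradient_stats(grads):
    """Global summary statistics over a flat list of gradient values,
    mirroring the fields LCBench logs (max, mean, median, norm, std,
    q10, q25, q75, q90). The quantile rule here is a simple
    nearest-rank choice, which may differ from the original logging."""
    s = sorted(grads)

    def q(p):  # nearest-rank quantile
        return s[min(len(s) - 1, int(p * len(s)))]

    return {
        "max": max(s),
        "mean": statistics.fmean(s),
        "median": statistics.median(s),
        "norm": sum(g * g for g in s) ** 0.5,  # L2 norm
        "std": statistics.pstdev(s),
        "q10": q(0.10), "q25": q(0.25), "q75": q(0.75), "q90": q(0.90),
    }
```

The layer-wise statistics are the same computation applied per layer instead of over all gradients at once.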

The data was created using Auto-PyTorch. All runs use funnel-shaped MLPs trained with SGD and cosine annealing without restarts. Overall, 7 hyperparameters were sampled at random (4 float, 3 integer):

  • Batch size: [16, 512], on log-scale
  • Learning rate: [1e-4, 1e-1], on log-scale
  • Momentum: [0.1, 0.99]
  • Weight decay: [1e-5, 1e-1]
  • Number of layers: [1, 4]
  • Maximum number of units per layer: [64, 1024], on log-scale
  • Dropout: [0.0, 1.0]
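To make the search space concrete, here is a hedged sketch of a sampler over these seven ranges. The key names are illustrative only; this is not the actual Auto-PyTorch ConfigSpace code used to generate the benchmark.

```py
import math
import random

def sample_config(rng=random):
    """Draw one random configuration from the LCBench-style search space.
    Log-scale parameters are sampled uniformly in log space."""
    def log_uniform(lo, hi):
        return math.exp(rng.uniform(math.log(lo), math.log(hi)))

    return {
        "batch_size": int(round(log_uniform(16, 512))),   # integer, log-scale
        "learning_rate": log_uniform(1e-4, 1e-1),         # float, log-scale
        "momentum": rng.uniform(0.1, 0.99),               # float
        "weight_decay": rng.uniform(1e-5, 1e-1),          # float
        "num_layers": rng.randint(1, 4),                  # integer
        "max_units": int(round(log_uniform(64, 1024))),   # integer, log-scale
        "max_dropout": rng.uniform(0.0, 1.0),             # float
    }

config = sample_config()
```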

Setup

Clone the git repository:

```sh
$ cd install/path
$ git clone ...
$ cd LCBench
```

Install requirements:

```sh
$ cat requirements.txt | xargs -n 1 -L 1 pip install
```

Downloading the data

You can download the data from figshare. Lightweight versions are indicated by 'lw'. Furthermore, the meta-features for all datasets are available in the same project.

Quickstart

Loading the data:

```py
from LCBench import Benchmark

bench = Benchmark(data_dir="path/to/data.json")
```

Querying:

```py
bench.query(dataset_name="credit-g", tag="Train/loss", config_id=0)
```

Listing available tags:

```py
bench.get_queriable_tags()
```

Note: Tags starting with "Train/" indicate metrics that are logged every epoch.
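Once several configurations have been queried, their per-epoch curves can be compared directly. A minimal sketch with stand-in data (in practice each list would come from `bench.query(dataset_name="credit-g", tag="Train/loss", config_id=i)` on the downloaded data):

```py
# Stand-in training-loss curves keyed by config_id; real curves would
# span 50 epochs and come from bench.query(...).
curves = {
    0: [0.90, 0.60, 0.50],
    1: [1.00, 0.55, 0.40],
    2: [0.80, 0.70, 0.65],
}

def best_config(curves):
    """Return (config_id, loss) for the configuration reaching the
    lowest loss anywhere on its curve."""
    return min(((cid, min(c)) for cid, c in curves.items()),
               key=lambda t: t[1])

print(best_config(curves))  # (1, 0.4)
```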

Examples

An extended introduction is given in the Jupyter notebook API Example. For documentation, you can also call help() on the API methods or check the source.

Tasks for the DL lecture 19/20

For the final project of the DL lecture, default tasks are defined in notebooks. Each notebook contains a short description of the task and a very basic example.

Leaderboard for Default Project

https://docs.google.com/spreadsheets/d/1igH18oFYT5yMNhbqJSVOiG-7SjZ0owvEn5sFJ-nxHDE/edit#gid=0

Citation

```
@article{ZimLin2021a,
  author  = {Lucas Zimmer and Marius Lindauer and Frank Hutter},
  title   = {Auto-PyTorch Tabular: Multi-Fidelity MetaLearning for Efficient and Robust AutoDL},
  journal = {IEEE Transactions on Pattern Analysis and Machine Intelligence},
  year    = {2021},
  volume  = {43},
  number  = {9},
  pages   = {3079--3090}
}
```

Owner

  • Name: AutoML-Freiburg-Hannover
  • Login: automl
  • Kind: organization
  • Location: Freiburg and Hannover, Germany

Citation (CITATION.cff)

@article { ZimLin2021a,
  author = {Lucas Zimmer and Marius Lindauer and Frank Hutter},
  title = {Auto-PyTorch Tabular: Multi-Fidelity MetaLearning for Efficient and Robust AutoDL},
  journal = {IEEE Transactions on Pattern Analysis and Machine Intelligence},
  year = {2021},
  volume = {43},
  number = {9},
  pages = {3079 - 3090}
}

GitHub Events

Total
  • Issues event: 1
  • Watch event: 1
  • Fork event: 1
Last Year
  • Issues event: 1
  • Watch event: 1
  • Fork event: 1

Committers

Last synced: 9 months ago

All Time
  • Total Commits: 21
  • Total Committers: 5
  • Avg Commits per committer: 4.2
  • Development Distribution Score (DDS): 0.524
Past Year
  • Commits: 1
  • Committers: 1
  • Avg Commits per committer: 1.0
  • Development Distribution Score (DDS): 0.0
Top Committers
Name Email Commits
LMZimmer z****s@w****e 10
LMZimmer 5****r 5
Marius Lindauer m****s@g****m 3
Lucas Zimmer z****l@i****e 2
Baohe Zhang b****g@s****e 1

Issues and Pull Requests

Last synced: 9 months ago

All Time
  • Total issues: 3
  • Total pull requests: 1
  • Average time to close issues: N/A
  • Average time to close pull requests: 1 minute
  • Total issue authors: 3
  • Total pull request authors: 1
  • Average comments per issue: 0.0
  • Average comments per pull request: 0.0
  • Merged pull requests: 1
  • Bot issues: 0
  • Bot pull requests: 0
Past Year
  • Issues: 1
  • Pull requests: 0
  • Average time to close issues: N/A
  • Average time to close pull requests: N/A
  • Issue authors: 1
  • Pull request authors: 0
  • Average comments per issue: 0.0
  • Average comments per pull request: 0
  • Merged pull requests: 0
  • Bot issues: 0
  • Bot pull requests: 0
Top Authors
Issue Authors
  • LabChameleon (1)
  • hvarfner (1)
  • vamp-ire-tap (1)
  • janakan97 (1)
Pull Request Authors
  • 2BH (1)

Dependencies

requirements.txt pypi
  • gzip *
  • matplotlib *
  • numpy *
  • pickle *
  • setuptools *