carps

A Framework for Comparing N Hyperparameter Optimizers on M Benchmarks.

https://github.com/automl/carp-s

Science Score: 54.0%

This score indicates how likely this project is to be science-related based on various indicators:

  • CITATION.cff file
    Found CITATION.cff file
  • codemeta.json file
    Found codemeta.json file
  • .zenodo.json file
    Found .zenodo.json file
  • DOI references
  • Academic publication links
  • Committers with academic emails
    5 of 12 committers (41.7%) from academic institutions
  • Institutional organization owner
  • JOSS paper metadata
  • Scientific vocabulary similarity
    Low similarity (14.1%) to scientific vocabulary

Keywords from Contributors

automl hyperparameter-optimization algorithm-configuration automated-machine-learning bayesian-optimisation bayesian-optimization gaussian-process hyperparameter-search hyperparameter-tuning
Last synced: 7 months ago

Repository

A Framework for Comparing N Hyperparameter Optimizers on M Benchmarks.

Basic Info
Statistics
  • Stars: 17
  • Watchers: 6
  • Forks: 0
  • Open Issues: 50
  • Releases: 7
Created almost 3 years ago · Last pushed 7 months ago
Metadata Files
Readme Changelog Contributing License Citation Authors

README.md


CARP-S

Welcome to CARP-S! This repository contains a benchmarking framework for optimizers. It allows flexibly combining optimizers and benchmarks via a simple interface, and logging experiment results and trajectories to a database. carps can launch experiment runs in parallel using Hydra, which offers launchers for Slurm (submitit), Ray, RQ, and joblib.

The main topics of this README are:

- Installation
- Minimal Example
- Commands
- Adding a new Optimizer or Benchmark

For more details on CARP-S, please have a look at the documentation or our blog post.

Installation

Installation from PyPI

To install CARP-S, you can simply use pip:

  1. Create virtual env with conda or uv

```bash
# Conda
conda create -n carps python=3.12
conda activate carps
```

-- OR --

```bash
# uv
pip install uv
export PIP="uv pip"  # Env var needed for Makefile commands
uv venv --python=3.12 carpsenv
source carpsenv/bin/activate
```

  2. Install carps:

```bash
pip install carps
```

Installation from Source

If you want to install from source, you can clone the repository and install CARP-S via:

Conda

```bash
git clone https://github.com/AutoML/CARP-S.git
cd CARP-S
export PIP="pip"
conda create -n carps python=3.12
conda activate carps

# Install for usage
$PIP install .
```

uv

```bash
git clone https://github.com/AutoML/CARP-S.git
cd CARP-S
pip install uv
export PIP="uv pip"
uv venv --python=3.12 carpsenv
source carpsenv/bin/activate

# Install for usage
$PIP install .

# Install as editable
$PIP install -e .
```

If you want to install CARP-S for development, you can use the following commands (from the root of the repo):

```bash
$PIP install -e .
python -m carps.build.make install-dev
```

Apptainer

⚠ This is still experimental. You can also use a container as an env, see this guide.

A note on python versions

For Python 3.12, numpy should be numpy>=2.0.0. For Python 3.10, numpy must be numpy==1.26.4; you can simply pip install numpy==1.26.4 after running the proposed install commands.

Installing Benchmarks and Optimizers

Additionally, you need to install the requirements for the benchmark and optimizer that you want to use.

⚠ You can specify the directory of the task data via export CARPS_TASK_DATA_DIR=.... Please use absolute paths. The default location is <carps package location>/task_data. If you specify a custom directory, always export the env var. (The carps package location is the root of the installed package, not of the repo.)
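As a concrete sketch of the custom-directory case (the path below is a placeholder, not a carps default):

```shell
# Pick an absolute task-data directory and export it before every carps run.
export CARPS_TASK_DATA_DIR="$HOME/carps_task_data"
mkdir -p "$CARPS_TASK_DATA_DIR"

# Sanity check: the path must be absolute (starts with /).
case "$CARPS_TASK_DATA_DIR" in
  /*) echo "using task data dir: $CARPS_TASK_DATA_DIR" ;;
  *)  echo "error: CARPS_TASK_DATA_DIR must be absolute" >&2; exit 1 ;;
esac
```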

For example, if you want to use the SMAC3 optimizer and the BBOB benchmark, you need to install the requirements for both of them via:

```bash
# Install options for optimizers and benchmarks (these are Makefile
# commands; check the Makefile at carps/build for more commands).
# Multiple options are separated by whitespace.
python -m carps.build.make benchmark_bbob optimizer_smac
```

The benchmarks and optimizers can all be installed in one environment (tested with Python 3.12).

All possible install options for benchmarks are: benchmark_bbob, benchmark_hpobench, benchmark_hpob, benchmark_mfpbench, benchmark_pymoo, benchmark_yahpo.

⚠ Some benchmarks require downloading surrogate models and/or containers and thus might take disk space and time to download.

All possible install options for optimizers are: optimizer_smac, optimizer_dehb, optimizer_nevergrad, optimizer_optuna, optimizer_ax, optimizer_skopt, optimizer_synetune. All of the above except optimizer_hebo work with Python 3.12.

You can also install all benchmarks in one go with the benchmarks target and all optimizers with the optimizers target. Check the carps/build/Makefile in carps for more details.

Minimal Example

Once the requirements for both an optimizer and a benchmark, e.g. SMAC2.0 and BBOB, are installed, you can run one of the following minimal examples to benchmark SMAC2.0 on BBOB directly with Hydra:

```bash
# Run SMAC BlackBoxFacade on a certain BBOB task
python -m carps.run +optimizer/smac20=blackbox +task/BBOB=cfg_4_1_0 seed=1 task.optimization_resources.n_trials=25

# Run SMAC BlackBoxFacade on all available BBOB tasks for 10 seeds
python -m carps.run +optimizer/smac20=blackbox '+task/BBOB=glob(*)' 'seed=range(1,11)' -m
```

For the second command, the Hydra -m (or --multirun) option indicates that multiple runs will be performed over a range of parameter values: here, over all available BBOB tasks (+task/BBOB=glob(*)) and 10 different seed values (seed=range(1,11)).
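Conceptually, the multirun sweep expands into the cross-product of the swept values; the following sketch illustrates the job count (the task names are made up, not actual BBOB config names):

```python
from itertools import product

# Hypothetical stand-ins for whatever glob(*) matches.
tasks = ["task_a", "task_b", "task_c"]
# Hydra's range(1,11) yields seeds 1..10 (upper bound exclusive).
seeds = list(range(1, 11))

# One job per (task, seed) combination, as the multirun would launch.
jobs = [(t, s) for t, s in product(tasks, seeds)]
print(len(jobs))  # 3 tasks x 10 seeds = 30 jobs
```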

Commands

For a complete list see the docs.

You can run a certain task and optimizer combination directly with Hydra via:

```bash
python -m carps.run +task=... +optimizer=... seed=... -m
```

To check whether any runs are missing, you can use the following command. It will create a file runcommands_missing.sh containing the missing runs:

```bash
python -m carps.utils.check_missing <rundir>
```

To collect all run data generated by the file logger into parquet files, use the following command:

```bash
python -m carps.analysis.gather_data <rundir>
```

The parquet files are then located in <rundir>. logs.parquet contains the trial info and values, and logs_cfg.parquet contains the experiment configuration. The experiments can be matched via the column experiment_id.
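Joining the two files on experiment_id can be sketched as below; the toy frames stand in for pd.read_parquet("&lt;rundir&gt;/logs.parquet") and pd.read_parquet("&lt;rundir&gt;/logs_cfg.parquet"), and all column names except experiment_id are made up for illustration:

```python
import pandas as pd

# Toy stand-in for logs.parquet: one row per trial.
logs = pd.DataFrame({
    "experiment_id": [1, 1, 2],
    "trial": [0, 1, 0],
    "value": [0.9, 0.5, 0.7],
})
# Toy stand-in for logs_cfg.parquet: one row per experiment.
logs_cfg = pd.DataFrame({
    "experiment_id": [1, 2],
    "optimizer": ["smac20-blackbox", "other-optimizer"],
})

# Attach each trial to its experiment configuration via experiment_id.
df = logs.merge(logs_cfg, on="experiment_id", how="left")
print(df.shape)  # (3, 4)
```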

CARPS and MySQL Database

By default, carps logs to files. This has its caveats: checking experiment status is a bit cumbersome (though possible with python -m carps.utils.check_missing <rundir>, which reports missing/failed experiments), and reading from the filesystem takes a long time. For this reason, carps can also control and log experiments to a MySQL database with PyExperimenter. See the guide in the docs for how to set it up.

Adding a new Optimizer or Benchmark

For instructions on how to add a new optimizer or benchmark, please refer to the contributing guidelines for benchmarks and optimizers.

Using your (external) optimizer or benchmark

If you are developing your optimizer or benchmark in a standalone package, you can use carps without working directly in the carps repo. For a custom benchmark, we have an example repo showing how to use your own benchmark with carps optimizers. For a custom optimizer, check this example repo. Information is also available here.

Evaluation Results

For each task_type (blackbox, multi-fidelity, multi-objective, and multi-fidelity-multi-objective) and set (dev and test), we run selected optimizers and provide the data. We provide links to the metadata, which contains the detailed optimization settings for each run, and to the running results of each optimizer-benchmark combination.

Owner

  • Name: AutoML-Freiburg-Hannover
  • Login: automl
  • Kind: organization
  • Location: Freiburg and Hannover, Germany

Citation (CITATION.cff)

---
cff-version: 1.2.0

message: "This is the message displayed when viewing the citation on github"

title: "CARP-S"
date-released: "<<date>>"

url: "https://automl.github.io/CARP-S/main/"

repository-code: "https://github.com/automl/CARP-S"

version: "1.0.4"

type: "template"
keywords:
  - "template"

<<requires::license license: "BSD license" endrequires::license>>

...

GitHub Events

Total
  • Create event: 30
  • Release event: 3
  • Issues event: 19
  • Watch event: 8
  • Delete event: 18
  • Issue comment event: 5
  • Member event: 2
  • Push event: 164
  • Pull request review comment event: 27
  • Pull request event: 45
  • Pull request review event: 26
Last Year
  • Create event: 30
  • Release event: 3
  • Issues event: 19
  • Watch event: 8
  • Delete event: 18
  • Issue comment event: 5
  • Member event: 2
  • Push event: 164
  • Pull request review comment event: 27
  • Pull request event: 45
  • Pull request review event: 26

Committers

Last synced: 9 months ago

All Time
  • Total Commits: 609
  • Total Committers: 12
  • Avg Commits per committer: 50.75
  • Development Distribution Score (DDS): 0.327
Past Year
  • Commits: 21
  • Committers: 3
  • Avg Commits per committer: 7.0
  • Development Distribution Score (DDS): 0.19
Top Committers
Name Email Commits
Carolin Benjamins c****s@a****e 410
Helena Graf h****f@a****e 115
Sarah Krebs s****s@a****e 33
Soham Basu s****7@g****m 13
dengdifan d****g@g****m 11
Alexander Tornede a****e@a****e 9
Eddie Bergman e****s@g****m 5
Sarah Segel 3****l 5
timruhkopf t****f@g****m 4
Tim Ruhkopf t****f@g****m 2
lhennig0103 1****3 1
Theresa Eimer t****r@a****e 1
Committer Domains (Top 20 + Academic)

Issues and Pull Requests

Last synced: 7 months ago

All Time
  • Total issues: 38
  • Total pull requests: 81
  • Average time to close issues: 3 months
  • Average time to close pull requests: 4 days
  • Total issue authors: 7
  • Total pull request authors: 9
  • Average comments per issue: 0.29
  • Average comments per pull request: 0.15
  • Merged pull requests: 69
  • Bot issues: 0
  • Bot pull requests: 0
Past Year
  • Issues: 15
  • Pull requests: 44
  • Average time to close issues: 4 days
  • Average time to close pull requests: 4 days
  • Issue authors: 5
  • Pull request authors: 3
  • Average comments per issue: 0.07
  • Average comments per pull request: 0.0
  • Merged pull requests: 35
  • Bot issues: 0
  • Bot pull requests: 0
Top Authors
Issue Authors
  • benjamc (38)
  • helegraf (5)
  • LukasFehring (2)
  • NielsRogge (1)
  • mwever (1)
  • daphne12345 (1)
  • lhennig0103 (1)
Pull Request Authors
  • benjamc (64)
  • Sohambasu07 (17)
  • sarah-segel (13)
  • eddiebergman (5)
  • dengdifan (4)
  • helegraf (4)
  • thibautklenke (4)
  • timruhkopf (3)
  • TheEimer (2)
Top Labels
Issue Labels
enhancement (8) bug (4) documentation (1)
Pull Request Labels
enhancement (3) bug (2)

Packages

  • Total packages: 1
  • Total downloads:
    • pypi 36 last-month
  • Total dependent packages: 0
  • Total dependent repositories: 0
  • Total versions: 7
  • Total maintainers: 5
pypi.org: carps

CARP-S: Benchmarking N Optimizers on M Benchmarks

  • Documentation: https://carps.readthedocs.io/
  • License: BSD License Copyright (c) 2024, Carolin Benjamins All rights reserved. Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met: * Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer. * Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution. * Neither the name of the copyright holder nor the names of its contributors may be used to endorse or promote products derived from this software without specific prior written permission. THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
  • Latest release: 1.0.4
    published 9 months ago
  • Versions: 7
  • Dependent Packages: 0
  • Dependent Repositories: 0
  • Downloads: 36 Last month
Rankings
Dependent packages count: 10.8%
Average: 35.8%
Dependent repos count: 60.8%
Last synced: 7 months ago