stochastic-benchmark

Repository for Stochastic Optimization Solvers Benchmark code

https://github.com/usra-riacs/stochastic-benchmark

Science Score: 36.0%

This score indicates how likely this project is to be science-related based on various indicators:

  • CITATION.cff file
  • codemeta.json file
    Found codemeta.json file
  • .zenodo.json file
    Found .zenodo.json file
  • DOI references
  • Academic publication links
    Links to: arxiv.org, aps.org
  • Academic email domains
  • Institutional organization owner
  • JOSS paper metadata
  • Scientific vocabulary similarity
    Low similarity (14.0%) to scientific vocabulary

Keywords

benchmark, optimization
Last synced: 6 months ago

Repository

Repository for Stochastic Optimization Solvers Benchmark code

Basic Info
Statistics
  • Stars: 7
  • Watchers: 4
  • Forks: 4
  • Open Issues: 3
  • Releases: 0
Topics
benchmark, optimization
Created over 4 years ago · Last pushed 7 months ago
Metadata Files
README, License, Citation

README.md

Window Sticker - Stochastic Benchmark


Repository for the Stochastic Optimization Solvers Benchmark: an implementation of the Window Sticker framework.

The benchmarking approach is described in this preprint: Benchmarking the Operation of Quantum Heuristics and Ising Machines: Scoring Parameter Setting Strategies on Optimization Applications.

Details of the implementation, along with an illustrative example using the Wishart instances found here, are given in this document.

Table of Contents

  • Background
  • Installation
  • Examples
  • Testing
  • Contributors
  • Acknowledgements
  • License

Background

This code has been created to produce a set of plots that characterize the performance of parameterized stochastic optimization solvers on a well-established family of optimization problems. These plots are built from experimental data obtained by running such solvers on seen instances of the problem family, and the resulting strategies are then evaluated on an unseen subset of problems. The methodology has been presented at the APS March Meeting and the INFORMS Annual Meeting; a manuscript explaining it in full is in preparation.

The performance plot, which we call the Window Sticker, is a graphical representation of the expected performance of a solution method or parameter setting strategy on an unseen instance from the same problem family for which it was generated. It aims to answer the question: with X% confidence, will we find a solution of quality Y after spending R resources? Note that both the quality metric and the resource value can be arbitrary functions of the parameters and performance of the given solver, making this a flexible tool for analyzing solver performance.
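
As a rough illustration (not part of the package API), a single Window Sticker point can be read as a confidence quantile of solution quality over bootstrapped runs at a fixed resource level. The sketch below uses hypothetical data and a hypothetical helper name, with NumPy only:

```python
import numpy as np

rng = np.random.default_rng(42)

def window_sticker_point(qualities, confidence=0.95):
    """Quality level achieved with the given confidence at one resource value.

    `qualities` holds bootstrapped best-solution qualities (higher is better)
    observed after spending a fixed amount of resource. With X% confidence we
    expect to reach at least the (1 - X) quantile of that distribution.
    """
    return np.quantile(qualities, 1.0 - confidence)

# Hypothetical bootstrapped qualities at three resource levels (e.g., sweeps).
resources = [10, 100, 1000]
boot_qualities = [rng.normal(loc=np.log10(r), scale=0.2, size=2000) for r in resources]

for r, q in zip(resources, boot_qualities):
    print(f"resource={r:>5}: with 95% confidence, quality >= {window_sticker_point(q):.3f}")
```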

The current package implements the following functionality:

  • Parse result files from parameterized stochastic solvers such as PySA and D-Wave Ocean tools.
  • Simulate the performance of such solvers with less data, through bootstrapping and downsampling.
  • Compute best-recommended parameters based on aggregated statistics and individual results for each parameter setting.
  • Compute an optimistic performance bound, known as the virtual best performance, based on the provided experiments.
  • Perform an exploration-exploitation parameter setting strategy, where the fraction of the allocated resources used in the exploration round is optimized. The exploration procedure is implemented either as a random search over the seen parameter settings or with a Bayesian method known as the Tree of Parzen Estimators, implemented in the Hyperopt package (see the sketch after this list).
  • Plot the Window Sticker, comparing the performance curves corresponding to the virtual best, recommended-parameters, and exploration-exploitation parameter setting strategies.
  • Plot the values of the parameters and their best values with respect to the resource considered, a plot we call the Strategy plot. These plots can show the actual solver parameter values or the meta-parameters associated with parameter-setting strategies.
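
As a hedged illustration of the Bayesian exploration step (this is not the package's internal API; the objective function and parameter names below are hypothetical), Hyperopt's Tree of Parzen Estimators optimizer can be driven as follows:

```python
from hyperopt import fmin, tpe, hp, Trials

# Hypothetical objective: evaluate one parameter setting of a stochastic solver
# and return a score to minimize (e.g., time-to-solution or 1 - success rate).
def objective(params):
    sweeps, replicas = params["sweeps"], params["replicas"]
    # Placeholder score; in practice this would run the solver on seen instances.
    return 1.0 / (sweeps * replicas)

# Search space over solver parameters (names are illustrative only).
space = {
    "sweeps": hp.quniform("sweeps", 10, 1000, 10),
    "replicas": hp.quniform("replicas", 1, 64, 1),
}

trials = Trials()
best = fmin(fn=objective, space=space, algo=tpe.suggest, max_evals=50, trials=trials)
print("Best parameters found:", best)
```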

Installation

Method 1: Cloning the Repository

  1. Clone the Repository:

```bash
git clone https://github.com/usra-riacs/stochastic-benchmark.git
cd stochastic-benchmark
```

  2. Set up a Virtual Environment (Recommended):

```bash
python3 -m venv venv
source venv/bin/activate  # On Windows use `.\venv\Scripts\activate`
```

  3. Install Dependencies:

```bash
pip install -r requirements.txt
```

Method 2: Downloading as a Zip Archive

  1. Download the Repository:

    • Navigate to the stochastic-benchmark GitHub page.
    • Click on the Code button.
    • Choose Download ZIP.
    • Once downloaded, extract the ZIP archive and navigate to the extracted folder in your terminal or command prompt.
  2. Set up a Virtual Environment (Recommended):

```bash
python3 -m venv venv
source venv/bin/activate  # On Windows use `.\venv\Scripts\activate`
```

  3. Install Dependencies:

```bash
pip install -r requirements.txt
```

Examples

For a full demonstration of the stochastic-benchmark analysis in action, refer to the example notebooks located in the examples folder of this repository.

Testing

Tests can be executed using the helper script run_tests.py. Specify the type of tests to run along with any optional flags:

```bash
python run_tests.py [unit|integration|smoke|all|coverage] [--verbose] [--fast]
```

Example commands:

  • Run the unit test suite:

```bash
python run_tests.py unit
```

  • Generate a coverage report:

```bash
python run_tests.py coverage
```

For additional details see TESTING.md.

Contributors

Acknowledgements

This code was developed under the NSF Expeditions Program, NSF award CCF-1918549, on Coherent Ising Machines.

License

Apache 2.0

Owner

  • Name: usra-riacs
  • Login: usra-riacs
  • Kind: organization

GitHub Events

Total
  • Watch event: 3
  • Push event: 2
  • Pull request event: 3
Last Year
  • Watch event: 3
  • Push event: 2
  • Pull request event: 3

Dependencies

pyproject.toml pypi
requirements.txt pypi
  • cloudpickle ==2.2.0
  • dill ==0.3.5.1
  • fonttools ==4.25.0
  • future ==0.18.3
  • hyperopt ==0.2.7
  • mkl-service ==2.4.0
  • multiprocess ==0.70.13
  • munkres ==1.1.4
  • networkx ==2.8.6
  • numpy ==1.22.0
  • pandas ==1.4.3
  • seaborn ==0.12.0rc0
  • tqdm ==4.66.3
setup.py pypi
.github/workflows/ci.yml actions
  • actions/checkout v4 composite
  • actions/setup-python v4 composite
  • actions/upload-artifact v4 composite
  • codecov/codecov-action v3 composite
requirements-dev.txt pypi
  • pytest * development
  • pytest-cov * development
  • pytest-xdist * development