stochastic-benchmark
Repository for Stochastic Optimization Solvers Benchmark code
Science Score: 36.0%
This score indicates how likely this project is to be science-related based on various indicators:
- ○ CITATION.cff file
- ✓ codemeta.json file: found codemeta.json file
- ✓ .zenodo.json file: found .zenodo.json file
- ○ DOI references
- ✓ Academic publication links: links to arxiv.org, aps.org
- ○ Academic email domains
- ○ Institutional organization owner
- ○ JOSS paper metadata
- ○ Scientific vocabulary similarity: low similarity (14.0%) to scientific vocabulary
Keywords
Repository
Basic Info
- Host: GitHub
- Owner: usra-riacs
- License: apache-2.0
- Language: Python
- Default Branch: main
- Homepage: https://github.com/usra-riacs/stochastic-benchmark/blob/main/Window_Sticker_Paper_Clean_Copy.pdf
- Size: 339 MB
Statistics
- Stars: 7
- Watchers: 4
- Forks: 4
- Open Issues: 3
- Releases: 0
Topics
Metadata Files
README.md
Window Sticker - Stochastic Benchmark
Repository for Stochastic Optimization Solvers Benchmark code
Repository for Stochastic Optimization Solvers Benchmark implementation of the Window Sticker framework.
The benchmarking approach is described in this preprint titled: Benchmarking the Operation of Quantum Heuristics and Ising Machines: Scoring Parameter Setting Strategies on Optimization Applications.
Details of the implementation, together with an illustrative example for the Wishart instances found here, are given in this document.
Background
This code was created to produce a set of plots that characterize the performance of parameterized stochastic optimization solvers on a well-established family of optimization problems. The plots are built from experimental data obtained by running such solvers on seen instances of the problem family and then evaluating the resulting strategies on an unseen subset of problems. More details of the methodology have been presented at the APS March Meeting and the INFORMS Annual Meeting; a manuscript explaining the methodology is in preparation. The performance plot, or, as we like to call it, the Window Sticker, is a graphical representation of the expected performance of a solution method or parameter setting strategy on an unseen instance from the same problem family for which it was generated. It aims to answer the question: with X% confidence, will we find a solution with Y quality after using R resources? Note that the quality metric and the resource values can be arbitrary functions of the parameters and performance of the given solver, making this a flexible tool for analyzing solver performance.
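To make the "with X% confidence, within R resources" question concrete, the following is a minimal, self-contained sketch (not the package's implementation) that bootstraps a confidence-level quantile from synthetic per-run resource data; the data, confidence level, and bootstrap size are illustrative placeholders.

```python
import numpy as np

rng = np.random.default_rng(seed=0)

# Hypothetical stand-in data: the resource (e.g., run time) each successful
# solver run needed on seen instances of the problem family.
resource_per_run = rng.lognormal(mean=2.0, sigma=0.5, size=200)

confidence = 0.80    # the "X%" in the Window Sticker question
n_bootstrap = 1000

# Bootstrap the confidence-level quantile of the required resource: the value R
# such that, with X% confidence, a run finishes within R resource units.
bootstrap_quantiles = np.array([
    np.quantile(
        rng.choice(resource_per_run, size=resource_per_run.size, replace=True),
        confidence,
    )
    for _ in range(n_bootstrap)
])

print(f"With {confidence:.0%} confidence, a solution is expected within "
      f"~{np.median(bootstrap_quantiles):.1f} resource units "
      f"(bootstrap std {bootstrap_quantiles.std():.1f})")
```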
The current package implements the following functionality:
- Parsing result files from parameterized stochastic solvers such as PySA and the D-Wave Ocean tools.
- Simulating the performance of such solvers with less data, through bootstrapping and downsampling.
- Computing best-recommended parameters based on aggregated statistics and on individual results for each parameter setting.
- Computing an optimistic performance bound, known as the virtual best performance, based on the provided experiments.
- Performing an exploration-exploitation parameter setting strategy, where the fraction of the allocated resources used in the exploration round is optimized. The exploration procedure is implemented either as a random search over the seen parameter settings or as a Bayesian method known as the Tree-structured Parzen Estimator, as implemented in the Hyperopt package (a minimal TPE sketch follows this list).
- Plotting the Window Sticker, comparing the performance curves corresponding to the virtual best, recommended-parameter, and exploration-exploitation parameter setting strategies.
- Plotting the parameter values and their best values with respect to the resource considered, a plot we call the Strategy plot. These plots can show the actual solver parameter values or the meta-parameters associated with parameter setting strategies.
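As a rough illustration of the Bayesian exploration step, the sketch below runs Hyperopt's Tree-structured Parzen Estimator on a toy objective. The search space, parameter names, and objective are hypothetical placeholders, not the package's actual interface; only the Hyperopt calls (hp, fmin, tpe.suggest, Trials) are standard.

```python
from hyperopt import fmin, tpe, hp, Trials

# Hypothetical search space over two solver parameters (placeholder names,
# not the package's actual interface).
space = {
    "sweeps": hp.quniform("sweeps", 32, 256, 32),
    "replicas": hp.quniform("replicas", 1, 8, 1),
}

def objective(params):
    # Toy stand-in for "run the solver on seen instances and score the result":
    # a synthetic loss that favors more sweeps and roughly four replicas.
    return 1.0 / params["sweeps"] + 0.01 * abs(params["replicas"] - 4)

trials = Trials()
best = fmin(fn=objective, space=space, algo=tpe.suggest,
            max_evals=50, trials=trials)
print("Best parameters suggested by TPE:", best)
```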
Installation
Method 1: Cloning the Repository
Clone the Repository:
```bash
git clone https://github.com/usra-riacs/stochastic-benchmark.git
cd stochastic-benchmark
```

Set up a Virtual Environment (Recommended):

```bash
python3 -m venv venv
source venv/bin/activate  # On Windows use `.\venv\Scripts\activate`
```

Install Dependencies:

```bash
pip install -r requirements.txt
```
Method 2: Downloading as a Zip Archive
Download the Repository:
- Navigate to the stochastic-benchmark GitHub page.
- Click on the Code button.
- Choose Download ZIP.
- Once downloaded, extract the ZIP archive and navigate to the extracted folder in your terminal or command prompt.
Set up a Virtual Environment (Recommended):
```bash
python3 -m venv venv
source venv/bin/activate  # On Windows use `.\venv\Scripts\activate`
```

Install Dependencies:

```bash
pip install -r requirements.txt
```
Examples
For a full demonstration of the stochastic-benchmark analysis in action, refer to the example notebooks located in the examples folder of this repository.
Testing
Tests can be executed using the helper script run_tests.py. Specify the type of
tests to run along with any optional flags:
```bash
python run_tests.py [unit|integration|smoke|all|coverage] [--verbose] [--fast]
```
Example commands:
- Run the unit test suite:
  ```bash
  python run_tests.py unit
  ```
- Generate a coverage report:
  ```bash
  python run_tests.py coverage
  ```
For additional details see TESTING.md.
Contributors
- @robinabrown Robin Brown
- @PratikSathe Pratik Sathe
- @bernalde David Bernal Neira
Acknowledgements
This code was developed under the NSF Expeditions Program, NSF award CCF-1918549, on Coherent Ising Machines.
License
This project is licensed under the Apache License 2.0.
Owner
- Name: usra-riacs
- Login: usra-riacs
- Kind: organization
- Repositories: 3
- Profile: https://github.com/usra-riacs
GitHub Events
Total
- Watch event: 3
- Push event: 2
- Pull request event: 3
Last Year
- Watch event: 3
- Push event: 2
- Pull request event: 3
Dependencies
- cloudpickle ==2.2.0
- dill ==0.3.5.1
- fonttools ==4.25.0
- future ==0.18.3
- hyperopt ==0.2.7
- mkl-service ==2.4.0
- multiprocess ==0.70.13
- munkres ==1.1.4
- networkx ==2.8.6
- numpy ==1.22.0
- pandas ==1.4.3
- seaborn ==0.12.0rc0
- tqdm ==4.66.3
- actions/checkout v4 composite
- actions/setup-python v4 composite
- actions/upload-artifact v4 composite
- codecov/codecov-action v3 composite
- pytest * development
- pytest-cov * development
- pytest-xdist * development