autotuning_methodology
This software package accompanies the paper "A Methodology for Comparing Auto-Tuning Optimization Algorithms" (https://doi.org/10.1016/j.future.2024.05.021), making the guidelines in the methodology easy to apply.
https://github.com/autotuningassociation/autotuning_methodology
Science Score: 67.0%
This score indicates how likely this project is to be science-related based on various indicators:
- ✓ CITATION.cff file: found CITATION.cff file
- ✓ codemeta.json file: found codemeta.json file
- ✓ .zenodo.json file: found .zenodo.json file
- ✓ DOI references: found 3 DOI reference(s) in README
- ✓ Academic publication links: links to zenodo.org
- ○ Committers with academic emails
- ○ Institutional organization owner
- ○ JOSS paper metadata
- ○ Scientific vocabulary similarity: low similarity (15.6%) to scientific vocabulary
Repository
Basic Info
- Host: GitHub
- Owner: AutoTuningAssociation
- License: mit
- Language: Python
- Default Branch: main
- Homepage: https://autotuningassociation.github.io/autotuning_methodology/
- Size: 5.76 MB
Statistics
- Stars: 6
- Watchers: 1
- Forks: 3
- Open Issues: 1
- Releases: 7
Metadata Files
README.md
Autotuning Methodology Software Package
This repository contains the software package accompanying the paper "A Methodology for Comparing Auto-Tuning Optimization Algorithms".
It makes the guidelines in the methodology easy to apply: simply specify the experiments .json file, run autotuning_visualize [path_to_json], and wait for the results!
Limitations & Future Work
Currently, the stable releases of this software package are compatible with Kernel Tuner and KTT, as in the paper. We plan to extend this to support more frameworks soon.
Installation
The package can be installed with pip install autotuning_methodology.
Alternatively, it can be installed by cloning this repository and running pip install . in the root of the cloned project.
Like most Python packages, installing in a virtual environment or with pipx is recommended. Python >= 3.10 is supported.
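A quick way to verify the installation is to import the package (a minimal sketch; the __version__ attribute is an assumption, a successful import alone already confirms the installation):

```python
# Minimal installation check. __version__ is an assumption; if the package
# does not expose it, the successful import itself confirms installation.
import autotuning_methodology

print(getattr(autotuning_methodology, "__version__", "installed"))
```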
Notable features
- Official software by the authors of the methodology-defining paper.
- Supports BAT benchmark suite, KTT, and Kernel Tuner.
- Split executor and visualizer, allowing the algorithms to be run on a cluster and the results to be visualized locally.
- Built-in caching to avoid duplicate executions.
- Planned support for T1 input and T4 output files.
- Notebook / interactive window mode; if enabled, plots are shown in the notebook / window instead of written to a folder.

Usage
Entry points
There are two entry points defined: autotuning_experiment and autotuning_visualize. Both take one argument: the path to an experiments file (see below).
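As a minimal sketch of how the entry points can be driven programmatically (assuming the package is installed so both console scripts are on PATH; path/to/experiments.json is a placeholder):

```python
import subprocess

# Run the experiments described in the experiments file, then visualize them.
# "path/to/experiments.json" is a placeholder for your own experiments file.
subprocess.run(["autotuning_experiment", "path/to/experiments.json"], check=True)
subprocess.run(["autotuning_visualize", "path/to/experiments.json"], check=True)
```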
Input files
To get started, all you need is an experiments file. This is a JSON file that describes the details of your comparison: which algorithms to use, which programs to tune on which devices, the graphs to output, and so on.
You can find the API and an example experiments.json in the documentation.
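Purely as an illustration of the idea, an experiments file can also be generated programmatically; every key below is hypothetical, the authoritative schema and a full example live in the documentation:

```python
import json

# Hypothetical sketch of an experiments file: all keys below are illustrative
# placeholders, not the real schema (see the API documentation for that).
experiment = {
    "name": "algorithm_comparison",
    "optimization_algorithms": ["random_sampling", "genetic_algorithm"],
    "programs": ["gemm", "convolution"],
    "devices": ["RTX_3090", "A100"],
}

with open("experiments.json", "w") as fh:
    json.dump(experiment, fh, indent=2)
```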
File references
As we are dealing with input and output files, file references matter.
When calling the entry points, you already provide the path to an experiments file.
File references in experiments files are relative to the location of the experiments file itself.
File references in tuning scripts are relative to the location of the tuning script itself. Tuning scripts must define the global literals file_path_results and file_path_metadata for this package to know where to find the results (see the sketch below).
Plots output by this package are placed in a folder called generated_plots, relative to the current working directory.
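A minimal tuning-script skeleton illustrating the two required globals (the tuning logic itself is omitted; the concrete relative paths are assumptions for illustration):

```python
# Minimal tuning-script skeleton. This package reads these module-level
# literals to locate the output files; both paths are resolved relative to
# this script's own location, not the current working directory.
file_path_results = "./results.json"
file_path_metadata = "./metadata.json"

# ... the actual tuning code goes here, writing its results and metadata
# to the two files referenced above.
```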
Pipeline
The schematics below show the pipeline implemented by this tool, as described in the paper.
The first flowchart shows the transformation of raw, stochastic optimization algorithm data into a performance curve.
The second flowchart shows the adaptation of the performance curves of various optimization algorithms and search spaces into the desired output.
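To give a flavor of the first step, here is a generic illustration of turning raw stochastic data into a performance curve (not the package's exact implementation):

```python
import numpy as np

# Generic illustration: rows are repeated runs of one algorithm, columns are
# successive function evaluations, values are e.g. kernel runtimes in ms
# (lower is better).
raw = np.array([
    [9.0, 7.5, 8.2, 6.1, 6.4],
    [8.1, 8.0, 6.9, 6.8, 5.9],
    [9.4, 7.1, 7.0, 6.5, 6.2],
])

running_best = np.minimum.accumulate(raw, axis=1)  # best-so-far per run
curve = running_best.mean(axis=0)                  # aggregate over repeats
print(curve)  # a simple mean performance curve over the tuning budget
```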
Contributing
Setup
If you're looking to contribute to this package: welcome!
Start out by installing with pip install -e .[dev] (this installs the package in editable mode alongside the development dependencies).
During development, unit and integration tests can be run with pytest.
Black is used as a formatter, and Ruff is used as a linter to check the formatting, import sorting et cetera.
When using Visual Studio Code, use the settings.json found in .vscode to automatically apply the correct linting, formatting, and sorting during development.
In addition, install the recommended extensions by searching for @recommended:workspace in the extensions tab for a better development experience.
Documentation
The documentation can be found here.
Locally, the documentation can be built with make clean html from the docs folder, provided the package has been installed in editable mode with pip install -e ..
Upon pushing to main or publishing a version, the documentation is built and published to GitHub Pages.
The docstring format used is Google. Type hints are to be included in the function signature and are therefore omitted from the docstring. In Visual Studio Code, the autoDocstring extension can be used to automatically infer docstrings. When referring to functions and parameters in a docstring outside of their definition, use double backquotes to be compatible with both Markdown and reStructuredText, e.g.: "``skip_draws_check``: skips checking that each value in ``draws`` is in the ``dist``.".
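For instance, a Google-style docstring following these conventions might look like this (the function and its parameters are hypothetical):

```python
import random


def draw(dist: list[float], n: int, skip_draws_check: bool = False) -> list[float]:
    """Draw ``n`` values from ``dist``.

    Args:
        dist: the distribution to draw from.
        n: the number of values to draw.
        skip_draws_check: skips checking that each value in ``draws`` is in the ``dist``.

    Returns:
        The drawn values.
    """
    draws = random.choices(dist, k=n)
    if not skip_draws_check:
        assert all(value in dist for value in draws)
    return draws
```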
Tests
Before contributing a pull request, please run nox and ensure it completes without errors. This tests against all Python versions explicitly supported by this package and checks whether the correct formatting has been applied.
Upon submitting a pull request or pushing to main, these same checks are run remotely via GitHub Actions.
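For reference, a noxfile for such a setup could look roughly like the sketch below (hypothetical; the repository's own noxfile.py is authoritative, and the listed Python versions are an assumption based on the >= 3.10 requirement):

```python
# Hypothetical noxfile.py sketch mirroring the checks described above.
import nox


@nox.session(python=["3.10", "3.11", "3.12"])  # assumed supported versions
def tests(session: nox.Session) -> None:
    """Run the unit and integration tests against each Python version."""
    session.install("-e", ".[dev]")
    session.run("pytest")


@nox.session
def lint(session: nox.Session) -> None:
    """Check formatting with Black and lint with Ruff."""
    session.install("black", "ruff")
    session.run("black", "--check", ".")
    session.run("ruff", "check", ".")
```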
Publishing
For publishing the package to PyPI (the Python Package Index), we use Flit and the to-pypi-using-flit GitHub Action to automate this.
Semantic version numbering is used as follows: MAJOR.MINOR.PATCH.
- MAJOR version for incompatible API changes.
- MINOR version for functionality added in a backward-compatible manner.
- PATCH version for backward-compatible bug fixes.
In addition, PEP 440 is adhered to, specifically for pre-release versioning.
Owner
- Name: AutoTuningAssociation
- Login: AutoTuningAssociation
- Kind: organization
- Repositories: 1
- Profile: https://github.com/AutoTuningAssociation
Citation (CITATION.cff)
cff-version: 1.2.0
title: Autotuning Methodology
message: >-
If you use this software, please cite both the article from preferred-citation and the software itself.
type: software
authors:
- given-names: Floris-Jan
family-names: Willemsen
email: f.q.willemsen@umail.leidenuniv.nl
affiliation: "Leiden University, Netherlands eScience Center"
orcid: "https://orcid.org/0000-0003-2295-8263"
identifiers:
- type: doi
value: 10.5281/zenodo.11207515
description: Zenodo DOI
repository-code: >-
https://github.com/AutoTuningAssociation/autotuning_methodology
url: >-
https://autotuningassociation.github.io/autotuning_methodology/
abstract: >-
This software package accompanies the paper "A Methodology
for Comparing Auto-Tuning Optimization Algorithms", making
the guidelines in the methodology easy to apply.
keywords:
- Auto-tuning
- Methodology
- Optimization Algorithms
- Performance Comparison
- Performance Metrics
- Performance Optimization
license: MIT
preferred-citation:
type: article
title: A Methodology for Comparing Optimization Algorithms for Auto-Tuning
journal: "Future Generation Computer Systems"
year: 2024
abstract: >-
Adapting applications to optimally utilize available hardware is no mean feat: the plethora of choices for optimization techniques are infeasible to maximize manually.
To this end, auto-tuning frameworks are used to automate this task, which in turn use optimization algorithms to efficiently search the vast search spaces.
However, there is a lack of comparability in studies presenting advances in auto-tuning frameworks and the optimization algorithms incorporated.
As each publication varies in the way experiments are conducted, metrics used, and results reported, comparing the performance of optimization algorithms among publications is infeasible.
The auto-tuning community identified this as a key challenge at the 2022 Lorentz Center workshop on auto-tuning.
The examination of the current state of the practice in this paper further underlines this.
We propose a community-driven methodology composed of four steps regarding experimental setup, tuning budget, dealing with stochasticity, and quantifying performance.
This methodology builds upon similar methodologies in other fields while taking into account the constraints and specific characteristics of the auto-tuning field, resulting in novel techniques.
The methodology is demonstrated in a simple case study that compares the performance of several optimization algorithms used to auto-tune CUDA kernels on a set of modern GPUs.
We provide a software tool to make the application of the methodology easy for authors and to simplify the reproducibility of results.
authors:
- given-names: Floris-Jan
family-names: Willemsen
email: f.q.willemsen@umail.leidenuniv.nl
affiliation: "Leiden University, Netherlands eScience Center"
orcid: "https://orcid.org/0000-0003-2295-8263"
- given-names: Richard
family-names: Schoonhoven
affiliation: Centrum Wiskunde & Informatica
orcid: "https://orcid.org/0000-0003-3659-929X"
- orcid: "https://orcid.org/0000-0002-5703-9673"
given-names: Jiří
family-names: Filipovič
affiliation: Masaryk University
- given-names: Jacob Odgård
family-names: Tørring
orcid: "https://orcid.org/0000-0002-9385-7948"
affiliation: Norwegian University of Science and Technology
- given-names: Rob
name-particle: van
family-names: Nieuwpoort
affiliation: Leiden University
orcid: "https://orcid.org/0000-0002-2947-9444"
- given-names: Ban
name-particle: van
family-names: Werkhoven
orcid: "https://orcid.org/0000-0002-7508-3272"
affiliation: "Leiden University, Netherlands eScience Center"
GitHub Events
Total
- Watch event: 2
- Push event: 64
- Pull request review comment event: 3
- Fork event: 1
- Create event: 1
Last Year
- Watch event: 2
- Push event: 64
- Pull request review comment event: 3
- Fork event: 1
- Create event: 1
Committers
Last synced: about 2 years ago
Top Committers
| Name | Email | Commits |
|---|---|---|
| fjwillemsen | f****n@i****m | 180 |
Issues and Pull Requests
Last synced: 6 months ago
All Time
- Total issues: 0
- Total pull requests: 4
- Average time to close issues: N/A
- Average time to close pull requests: 19 days
- Total issue authors: 0
- Total pull request authors: 2
- Average comments per issue: 0
- Average comments per pull request: 0.25
- Merged pull requests: 3
- Bot issues: 0
- Bot pull requests: 0
Past Year
- Issues: 0
- Pull requests: 1
- Average time to close issues: N/A
- Average time to close pull requests: about 2 months
- Issue authors: 0
- Pull request authors: 1
- Average comments per issue: 0
- Average comments per pull request: 0.0
- Merged pull requests: 1
- Bot issues: 0
- Bot pull requests: 0
Top Authors
Pull Request Authors
- fjwillemsen (2)
- jhozzova (2)
Dependencies
- actions/checkout v3 composite
- actions/setup-python v4 composite
- sphinx-notes/pages v3 composite
- AsifArmanRahman/to-pypi-using-flit v1 composite
- actions/checkout v3 composite
- actions/setup-python v4 composite
- jsonschema >= 4.17.3
- kernel_tuner >= 0.4.5
- matplotlib >= 3.7.1
- nonconformist >= 2.1.0
- numpy >= 1.22.4
- progressbar2 >= 4.2.0
- scikit-learn >= 1.0.2
- yappi >= 1.4.0