Science Score: 67.0%
This score indicates how likely this project is to be science-related based on various indicators:
- ✓ CITATION.cff file: Found CITATION.cff file
- ✓ codemeta.json file: Found codemeta.json file
- ✓ .zenodo.json file: Found .zenodo.json file
- ✓ DOI references: Found 3 DOI reference(s) in README
- ✓ Academic publication links: Links to zenodo.org
- ○ Committers with academic emails
- ○ Institutional organization owner
- ○ JOSS paper metadata
- ○ Scientific vocabulary similarity: Low similarity (18.2%) to scientific vocabulary
Keywords
benchmarking
feelpp
hidalgo2
Keywords from Contributors
sequences
mesh
scheduling
interactive
optim
agents
hacking
network-simulation
Last synced: 6 months ago
Repository
Feel++ Benchmarking
Basic Info
- Host: GitHub
- Owner: feelpp
- License: bsd-3-clause
- Language: Python
- Default Branch: master
- Homepage: https://feelpp.github.io/benchmarking/
- Size: 60.5 MB
Statistics
- Stars: 2
- Watchers: 4
- Forks: 1
- Open Issues: 39
- Releases: 9
Topics
benchmarking
feelpp
hidalgo2
Created about 2 years ago · Last pushed 10 months ago
Metadata Files
Readme
Contributing
License
Code of conduct
Citation
README.adoc
:cpp: C++
:project: feelpp.benchmarking
:reframe: ReFrame
= The {project} Project
[cols="3", options="noborder,autowidth"]
|===
| image:https://zenodo.org/badge/DOI/10.5281/zenodo.15013241.svg[DOI,link=https://doi.org/10.5281/zenodo.15013241]
| image:https://img.shields.io/github/v/release/feelpp/benchmarking.svg[GitHub Release,link=https://github.com/feelpp/benchmarking/releases/latest]
| image:https://img.shields.io/github/actions/workflow/status/feelpp/benchmarking/ci.yml[GitHub Actions Workflow Status,link=https://github.com/feelpp/benchmarking/actions]
|===
The `feelpp.benchmarking` framework is designed to automate and facilitate the benchmarking process for any application. It enables highly customized and flexible benchmarking, providing a pipeline that guides users from test execution to comprehensive report generation.
The framework is built on top of link:https://reframe-hpc.readthedocs.io/en/stable/index.html[ReFrame-HPC] and leverages a modular configuration system based on three JSON files that define machine-specific options, benchmark parameters, and dashboard report figures.
Key features include:
- Tailored Configuration:
Separate JSON files allow for detailed setup of machine environments, benchmark execution (including executable paths and test parameters), and report formatting. The use of placeholder syntax facilitates easy refactoring and reuse.
- Automated and Reproducible Workflows:
The pipeline validates input configurations, sets up execution environments via ReFrame, and runs benchmarks in a controlled manner. Results are gathered and transformed into comprehensive, reproducible reports.
- Container Integration:
With support for Apptainer (or Singularity), the framework ensures that benchmarks can be executed consistently across various HPC systems.
- Continuous Benchmarking:
The integration with CI/CD pipelines enables ongoing performance tracking. This is especially useful in projects where monitoring application scaling and performance over time is critical.
The feelpp.benchmarking tool serves as a centralized platform for aggregating benchmarking results into a clear dashboard, making it an essential resource for those looking to monitor and enhance HPC application performance.
== Installation
=== Using PyPI
- Use a python virtual environment (Optional)
[source,bash]
----
python3 -m venv .venv
source .venv/bin/activate
----
- Install the package
[source,bash]
----
pip install feelpp-benchmarking
----
=== Using Git
- Clone the repository
[source,bash]
----
git clone https://github.com/feelpp/benchmarking.git
----
- Use a python virtual environment (Optional)
[source,bash]
----
python3 -m venv .venv
source .venv/bin/activate
----
- Install the project and dependencies
[source,bash]
----
python3 -m pip install .
----
== Prerequisites
In order to generate the benchmark reports website, Antora must be configured beforehand, along with the necessary extensions. More information can be found in the link:https://docs.antora.org/antora/latest/install-and-run-quickstart/[Antora Documentation].
Make sure you have link:http://nodejs.org/en/download[Node.js] installed.
To initialize the Antora environment, run
[source,bash]
----
feelpp-antora init -d -t -n
----
[NOTE]
====
- The project title will be shown on the webpage, but the project name serves as an ID for your project.
- The project name must not contain spaces or any special character (only underscore is supported).
====
The previous script will do the following:
- Initialize a git repository if the directory is not already one.
- Copy and install a `packages.json` file containing necessary dependencies.
- Create the `/docs/modules/ROOT/` directories where rendered files will be located.
- Create an Antora component version descriptor and an Antora playbook.
== Quickstart
For detailed documentation, refer to our link:https://bench.feelpp.org/benchmarking/tutorial/index.html[docs].
=== The benchmarked application
The framework includes a sample C++/MPI application that can be used to get familiar with its core concepts. It can be found under _examples/parallelsum/parallelSum.cpp_, or you can download it here: link:https://github.com/feelpp/benchmarking/blob/master/examples/parallelsum/parallelSum.cpp[parallelSum.cpp]
This Feel++ Benchmarking "Hello World" application computes the sum of an array distributed across multiple MPI processes. Each process computes a partial sum, and the partial sums are then reduced to obtain the total sum.
Additionally, the app will measure the time taken to perform the partial sum, and will save it under a _scalability.json_ file.
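As an illustration only, the file can be read back with a few lines of Python, as sketched below. The exact field names are defined by the sample application; `scalability.json` is assumed here to contain top-level numeric timing fields, and its actual location is `<output_directory>/<instance>/scalability.json`, as declared in the benchmark configuration shown later in this quickstart.
[source,python]
----
# Illustration only: the field names are assumptions, not prescribed by the
# framework. The benchmark configuration collects every top-level field of
# this file ("variables_path": "*").
import json

# Actual location: <output_directory>/<instance>/scalability.json
with open("scalability.json") as f:
    timings = json.load(f)

for name, value in timings.items():
    print(f"{name}: {value}")
----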
You can update the sample application and recompile it for a specific config as needed.
[source,bash]
----
mpic++ -std=c++17 -o examples/parallelsum/parallelSum examples/parallelsum/parallelSum.cpp
----
=== Machine configuration files
The framework also contains a default system configuration, located under _examples/machines/default.json_ and corresponding to the ReFrame configuration file _examples/machines/default.py_, which looks like this:
[source,json]
----
{
//will use the `default` ReFrame configuration file
"machine": "default",
//will run on the default partition, using the built-in (or local) platform and the default env
"targets":["default:builtin:default"],
//Tests will run asynchronously (up to 4 jobs at a time)
"execution_policy": "async",
// ReFrame's stage and output directories will be located under _./build/reframe/_
"reframe_base_dir":"./build/reframe",
// The generated JSON report will be created under _./reports/_
"reports_base_dir":"./reports/",
// The base directory where the executable is located
"input_dataset_base_dir":"$PWD",
//The C++ app outputs will be stored under the current working directory (./)
"output_app_dir":"$PWD"
}
----
You can download this configuration file here: link:https://github.com/feelpp/benchmarking/blob/master/examples/machines/default.json[default.json].
The framework also contains a very basic sample ReFrame configuration file under _examples/machines/default.py_ (link:https://github.com/feelpp/benchmarking/blob/master/examples/machines/default.py[default.py]).
This file will cause the tool to use a local scheduler and the `mpiexec` launcher with no options. For more advanced configurations, refer to link:https://reframe-hpc.readthedocs.io/en/stable/config_reference.html#[ReFrame's configuration reference].
More information on _feelpp.benchmarking_ machine configuration files can be found in the documentation: link:https://bench.feelpp.org/benchmarking/tutorial/configuration.html#_machine_configuration[Machine configuration].
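For reference, a ReFrame site configuration of the kind described above is sketched below. This is not the repository's _default.py_, only a minimal illustration of a single local system whose partition uses ReFrame's local scheduler and the `mpiexec` launcher; see ReFrame's configuration reference for the full schema.
[source,python]
----
# Sketch only (not the repository's default.py): one system with a single
# partition using ReFrame's local scheduler and the mpiexec launcher.
site_configuration = {
    "systems": [
        {
            "name": "default",
            "descr": "Local machine",
            "hostnames": [".*"],
            "partitions": [
                {
                    "name": "default",
                    "descr": "Default partition",
                    "scheduler": "local",
                    "launcher": "mpiexec",
                    "environs": ["default"],
                }
            ],
        }
    ],
    "environments": [
        {"name": "default"}
    ],
}
----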
=== Benchmark configuration files
Along with machine configuration files, users must provide the specifications of the benchmark. A sample file is provided under _examples/parallelsum/parallelSum.json_ (link:https://github.com/feelpp/benchmarking/blob/master/examples/parallelsum/parallelSum.json[parallelSum.json]).
[source, json]
----
{
//Executable path (Change the location to the actual executable)
"executable": "{{machine.input_dataset_base_dir}}/examples/parallelsum/parallelSum",
"use_case_name": "parallel_sum",
"timeout":"0-0:5:0",
"output_directory": "{{machine.output_app_dir}}/examples/parallelsum/outputs/parallelSum",
//Application options
"options": [ "{{parameters.elements.value}}", "{{output_directory}}/{{instance}}" ],
//Files containing execution times
"scalability": {
"directory": "{{output_directory}}/{{instance}}/",
"stages": [
{
"name":"",
"filepath": "scalability.json",
"format": "json",
"variables_path":"*"
}
]
},
// Resources for the test
"resources":{
"tasks":"{{parameters.tasks.value}}"
},
// Files containing app outputs
"outputs": [
{
"filepath":"{{output_directory}}/{{instance}}/outputs.csv",
"format":"csv"
}
],
// Test validation (Only stdout supported at the moment)
"sanity": { "success": ["[SUCCESS]"], "error": ["[OOPSIE]","Error"] },
// Test parameters
"parameters": [
{
"name": "tasks",
"sequence": [1,2,4]
},
{
"name":"elements",
"linspace":{ "min":100000000, "max":1000000000, "n_steps":4 }
}
]
}
----
[CAUTION]
====
Remember to modify the `executable` path as well as `output_directory` if installing via pip.
====
More information about _feelpp.benchmarking_ benchmark specifications can be found link:https://bench.feelpp.org/benchmarking/tutorial/configuration.html#_benchmark_configuration[here].
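To make the `parameters` section above more concrete, the sketch below shows the expansion it implies, assuming that `linspace` behaves like `numpy.linspace` and that parameters are combined as a cartesian product; the actual expansion is performed by the framework itself.
[source,python]
----
# Illustration only: the real expansion is done by feelpp.benchmarking.
import itertools
import numpy as np

tasks = [1, 2, 4]                                      # "sequence" parameter
elements = np.linspace(100_000_000, 1_000_000_000, 4)  # "linspace" parameter

for t, n in itertools.product(tasks, elements):
    print(f"tasks={t:>2}  elements={int(n)}")
# 3 task counts x 4 element counts = 12 benchmark instances
----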
=== Plots configuration
Along with the benchmark configuration, a figure configuration file is provided under _examples/parallelsum/plots.json_. Download it here: link:https://github.com/feelpp/benchmarking/blob/master/examples/parallelsum/plots.json[plots.json].
An example of one figure specification is shown below. Users can add as many figures as they wish, matching the figure axes with the parameters used in the benchmark.
[source,json]
----
{
"title": "Absolute performance",
"plot_types": [ "stacked_bar", "grouped_bar" ],
"transformation": "performance",
"variables": [ "computation_time" ],
"names": ["Time"],
"xaxis":{ "parameter":"resources.tasks", "label":"Number of tasks" },
"yaxis":{"label":"Execution time (s)"},
"secondary_axis":{ "parameter":"elements", "label":"N" }
}
----
More information about _feelpp.benchmarking_ figure configuration can be found link:https://bench.feelpp.org/benchmarking/tutorial/configuration.html#_figures[here].
=== Running a benchmark
Finally, to benchmark the test application, generate the reports, and plot the figures, run the following (changing the file paths as needed):
[source,bash]
----
feelpp-benchmarking-exec --machine-config examples/machines/default.json \
--custom-rfm-config examples/machines/default.py \
--benchmark-config examples/parallelsum/parallelSum.json \
--plots-config examples/parallelsum/plots.json \
--website
----
The `--website` option will start an HTTP server on localhost so that the website can be viewed. Check the console for more information.
[CAUTION]
====
If you installed the framework via PyPI:
- You need to download all 5 quickstart files directly.
- The `--website` option will only work if you have the same Antora setup as this repository.
====
== Usage
=== Executing a benchmark
In order to execute a benchmark, you can make use of the `feelpp-benchmarking-exec` command after all configuration files have been set (xref:tutorial:configuration.adoc[Configuration Reference]).
The script accepts the following options:
`--machine-config`, (`-mc`)
Path to the JSON machine configuration file, specific to a system.
`--plots-config`, (`-pc`) Path to JSON plots configuration file, used to generate figures.
If not provided, no plots will be generated. The plots configuration can also be included in the benchmark configuration file, under the "plots" field.
`--benchmark-config`, (`-bc`)
Paths to JSON benchmark configuration files.
When used in combination with `--dir`, provide only the basenames of the JSON files to select.
`--custom-rfm-config`, (`-rc`)
Additional ReFrame configuration file to use instead of the built-in ones. It should correspond with the `--machine-config` specifications.
`--dir`, (`-d`) Name of the directory containing JSON configuration files.
`--exclude`, (`-e`) To be used in combination with `--dir`; the listed files will not be launched.
Only provide basenames to exclude.
`--move-results`, (`-mv`) Directory to move the resulting files to.
If not provided, result files will be located under the directory specified by the machine configuration.
`--list-files`, (`-lf`) List all benchmarking configuration files found.
If this option is provided, the application will not run. Use it for validation.
`--verbose`, (`-v`) Select verbose level by specifying multiple v's.
`--help`, (`-h`) Display help and quit the program.
`--website`, (`-w`) Render reports, compile them and create the website.
`--dry-run` Execute ReFrame in dry-run mode. No tests will run, but the scripts to execute them will be generated in the stage directory. Configuration validation will be skipped, although warnings will be raised if the configuration is invalid.
When a benchmark is done, a `website_config.json` file will be created (or updated) with the current filepaths of the reports and plots generated by the framework. If the `--website` flag is active, the `feelpp-benchmarking-render` command will be launched with this file as argument.
=== Rendering reports
To render reports, a website configuration file is needed. This file describes how the website views should be structured and the hierarchy of the benchmarks.
A file of this type, called _website_config.json_, is generated after a benchmark is launched; it is found at the root of the _reports_ directory specified under the `reports_base_dir` field of the machine configuration file (xref:tutorial:configfiles/machine.adoc[]).
Once this file is located, users can run the `feelpp-benchmarking-render` command to render existing reports.
The script takes the following arguments:
`--config-file` (`-c`): The path of the website configuration file.
`--remote-download-dir` (`-do`): [Optional] Path of the directory to download the reports to. Only relevant if the configuration file contains remote locations (only Girder is supported at the moment).
`--modules-path` (`-m`): [Optional] Path to the Antora module to render the reports to. It defaults to _docs/modules/ROOT/pages_. Multiple directories will be recursively created under the provided path.
`--overview-config` (`-oc`): Path to the overview figure configuration file.
`--plot-configs` (`-pc`): Path to a plot configuration to use for a given benchmark. To be used along with `--patch-reports`.
`--patch-reports` (`-pr`): IDs of the reports to patch. The syntax of an ID is machine:application:usecase:date, e.g. gaya:feelpp_app:my_use_case:2024_11_05T01_05_32. It is possible to affect all reports in a component by replacing the machine, application, use case, or date with 'all'. One can also patch the latest report by replacing the date with 'latest'. If this option is not provided but `--plot-configs` is, then the latest report (most recent report date) will be patched.
`--save-patches` (`-sp`): If this flag is active, existing plot configurations will be replaced with the ones provided in the patch.
`--website` (`-w`): [Optional] Automatically compile the website and start an HTTP server.
Owner
- Name: Feel++ Consortium
- Login: feelpp
- Kind: organization
- Email: support@feelpp.org
- Location: Strasbourg, France
- Website: http://docs.feelpp.org
- Twitter: feelpp
- Repositories: 135
- Profile: https://github.com/feelpp
Consortium around the Feel++ framework to solve partial differential equations
Citation (CITATION.cff)
cff-version: 1.2.0
message: "If you use Feel++ Benchmarking in your research, please cite it using the following reference."
title: "Feel++ Benchmarking"
abstract: "The feelpp.benchmarking framework automates and facilitates the benchmarking process for any application. It provides a highly customized and flexible pipeline for test execution and comprehensive report generation. Built on ReFrame-HPC, it leverages a modular configuration system and supports container integration for consistent execution across HPC systems."
authors:
- family-names: "Cladellas"
given-names: "Javier"
orcid: "https://orcid.org/0009-0003-8687-7881"
- family-names: "Prud'homme"
given-names: "Christophe"
affiliation: "University of Strasbourg"
orcid: "https://orcid.org/0000-0003-2287-2961"
- family-names: "Chabannes"
given-names: "Vincent"
affiliation: "Feel++ Consortium"
orcid: "https://orcid.org/0009-0005-3602-3524"
version: "3.0.2"
doi: "10.5281/zenodo.15013241"
date-released: "2025-03-12"
url: "https://github.com/feelpp/benchmarking"
keywords:
- "benchmarking"
- "high-performance computing"
- "ReFrame-HPC"
- "automated workflows"
- "container integration"
GitHub Events
Total
- Create event: 104
- Commit comment event: 2
- Release event: 7
- Issues event: 172
- Watch event: 1
- Delete event: 70
- Issue comment event: 120
- Push event: 910
- Pull request event: 199
- Pull request review event: 89
- Pull request review comment event: 16
- Fork event: 1
Last Year
- Create event: 104
- Commit comment event: 2
- Release event: 7
- Issues event: 172
- Watch event: 1
- Delete event: 70
- Issue comment event: 120
- Push event: 910
- Pull request event: 199
- Pull request review event: 89
- Pull request review comment event: 16
- Fork event: 1
Committers
Last synced: about 2 years ago
Top Committers
| Name | Email | Commits |
|---|---|---|
| Christophe Prud'homme | c****e@c****r | 11 |
| dependabot[bot] | 4****] | 2 |
| github-actions[bot] | g****] | 1 |
Committer Domains (Top 20 + Academic)
cemosis.fr: 1
Issues and Pull Requests
Last synced: 6 months ago
All Time
- Total issues: 86
- Total pull requests: 141
- Average time to close issues: 25 days
- Average time to close pull requests: 7 days
- Total issue authors: 4
- Total pull request authors: 5
- Average comments per issue: 0.09
- Average comments per pull request: 0.86
- Merged pull requests: 82
- Bot issues: 0
- Bot pull requests: 67
Past Year
- Issues: 86
- Pull requests: 126
- Average time to close issues: 25 days
- Average time to close pull requests: 6 days
- Issue authors: 4
- Pull request authors: 5
- Average comments per issue: 0.09
- Average comments per pull request: 0.84
- Merged pull requests: 80
- Bot issues: 0
- Bot pull requests: 52
Top Authors
Issue Authors
- JavierCladellas (103)
- prudhomm (34)
- tanguypierre (6)
- vincentchabannes (1)
- thomas-saigre (1)
Pull Request Authors
- JavierCladellas (101)
- github-actions[bot] (54)
- dependabot[bot] (53)
- prudhomm (10)
- tanguypierre (4)
- vincentchabannes (2)
Top Labels
Issue Labels
enhancement (80)
2024-m1-perfdashboard (17)
hpc (13)
visualization (12)
benchmarking (12)
ci-cd (11)
report (11)
bug (10)
documentation (8)
github_actions (6)
cb (6)
refactor (6)
i/o (5)
reporting (4)
code-quality (3)
cleanup (2)
data-visualization (2)
review (2)
bug: fte (2)
benchmarking: kub (2)
presentation (2)
data (2)
data-metrics (2)
data-storage (1)
gpu (1)
dependencies (1)
metadata (1)
spack (1)
good first issue (1)
retrospective (1)
Pull Request Labels
enhancement (86)
dependencies (53)
new-benchmark (51)
javascript (43)
github_actions (22)
hpc (19)
benchmarking (18)
bug (17)
ci-cd (17)
visualization (14)
documentation (11)
reporting (10)
cb (10)
data-visualization (6)
refactor (4)
i/o (3)
bug: fte (3)
report (2)
gpu (2)
step: check (2)
cleanup (2)
step: deploy (2)
data-security (2)
benchmarking: feelpp (2)
2024-m1-perfdashboard (2)
data-management (1)
Packages
- Total packages: 1
- Total downloads (PyPI): 18 last month
- Total dependent packages: 0
- Total dependent repositories: 0
- Total versions: 6
- Total maintainers: 1
pypi.org: feelpp-benchmarking
The Feel++ Benchmarking Project
- Homepage: https://github.com/feelpp/benchmarking
- Documentation: https://feelpp-benchmarking.readthedocs.io/
- License: MIT License
- Latest release: 3.0.4 (published 11 months ago)
Rankings
Dependent packages count: 9.6%
Average: 31.7%
Dependent repos count: 53.9%
Maintainers: 1
Last synced: 6 months ago
Dependencies
.github/workflows/ci.yml
actions
- JamesIves/github-pages-deploy-action v4 composite
- actions/checkout v4 composite
- actions/download-artifact v3 composite
- actions/upload-artifact v3 composite
- docker/build-push-action v5 composite
- docker/login-action v3 composite
- docker/metadata-action v5 composite
- docker/setup-buildx-action v3 composite
- docker/setup-qemu-action v3 composite
- softprops/action-gh-release v1 composite
.github/workflows/init.yml
actions
- actions/checkout v3 composite
- ad-m/github-push-action master composite
Dockerfile
docker
- ghcr.io/feelpp/feelpp jammy build
package-lock.json
npm
- 353 dependencies
package.json
npm
- broken-link-checker ^0.7.8 development
- http-server ^14.1.1 development
- write-good ^1.0.8 development
- @antora/cli ^3.1
- @antora/collector-extension ^1.0.0-alpha.3
- @antora/site-generator ^3.1
- @antora/site-generator-default ^3.1
- @asciidoctor/core ^2.2.6
- @feelpp/asciidoctor-extensions 1.0.0-rc.8
- asciidoctor ^2.2.6
- asciidoctor-jupyter ^0.7.0
- asciidoctor-kroki ^0.17.0
- handlebars ^4.7.8
- handlebars-utils ^1.0.6
pyproject.toml
pypi
requirements.txt
pypi
- build *
- ipykernel *
- numpy *
- pandas *
- pipx *
- plotly *
- pyproject-metadata *
- scikit-build >=0.8.1
- scikit_build_core *
- scipy *
- tabulate *