benchmarking-tool

⏱️ Linux command-line Benchmarking Tool

https://github.com/tdulcet/benchmarking-tool

Science Score: 54.0%

This score indicates how likely this project is to be science-related based on various indicators:

  • CITATION.cff file
    Found CITATION.cff file
  • codemeta.json file
    Found codemeta.json file
  • .zenodo.json file
    Found .zenodo.json file
  • DOI references
  • Academic publication links
  • Committers with academic emails
    1 of 1 committers (100.0%) from academic institutions
  • Institutional organization owner
  • JOSS paper metadata
  • Scientific vocabulary similarity
    Low similarity (12.4%) to scientific vocabulary
Last synced: 7 months ago

Repository

⏱️ Linux command-line Benchmarking Tool

Basic Info
  • Host: GitHub
  • Owner: tdulcet
  • License: MIT
  • Language: Shell
  • Default Branch: main
  • Homepage:
  • Size: 97.7 KB
Statistics
  • Stars: 2
  • Watchers: 1
  • Forks: 1
  • Open Issues: 2
  • Releases: 0
Created over 5 years ago · Last pushed about 1 year ago
Metadata Files
  • Readme
  • License
  • Citation

README.md

Benchmarking Tool

Linux command-line Benchmarking Tool

Copyright © 2020 Teal Dulcet

A port of the hyperfine Benchmarking Tool to Bash.

  • Does NOT require installing Rust, downloading dependencies or compiling anything.
  • Includes the same features (though it is Linux only), produces the same output (with some improvements) and supports most of the same command-line options.
  • Outputs most of the numbers with greater precision and outputs more information.
  • Supports outputting in ASCII only (no Unicode characters) to support older terminals.
  • Slightly faster when interactive output (the progress bar) is disabled, as it does not need to launch intermediate shells.
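Like hyperfine, the script reports summary statistics over the timed runs. As an illustrative sketch only (not the script's actual implementation), the mean and sample standard deviation of a set of run times can be computed in shell with awk:

```shell
# Illustrative sketch (not the script's actual code): compute the mean and
# sample standard deviation of a set of run times, as hyperfine-style
# benchmarking tools report.
times="0.302 0.298 0.305 0.301"   # run times in seconds (made-up data)
printf '%s\n' $times | awk '
    { sum += $1; sumsq += $1 * $1; n++ }
    END {
        mean = sum / n
        # sample variance: (sum of squares - n * mean^2) / (n - 1)
        stddev = sqrt((sumsq - n * mean * mean) / (n - 1))
        printf "mean: %.4f s, stddev: %.4f s\n", mean, stddev
    }'
# mean: 0.3015 s, stddev: 0.0029 s
```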

❤️ Please visit tealdulcet.com to support this script and my other software development.

[Plot: Benchmark of the GNU factor and uutils factor commands.]

Also see the Testing and Benchmarking scripts.

Usage

Run: ./time.sh [OPTIONS] <command(s)>...
All the options can also be set by opening the script in an editor and setting the variables at the top. See Help below for full usage information.

  1. Download the script (time.sh). Run: wget https://raw.github.com/tdulcet/Benchmarking-Tool/main/time.sh.
  2. Execute the script once to make sure there are no errors. For example, run: chmod u+x time.sh and ./time.sh 'sleep 0.3'.
  3. If you want the script to be available for all users, install it. Run: sudo cp time.sh /usr/local/bin/benchmark and sudo chmod +x /usr/local/bin/benchmark.

Help

```
$ benchmark -h
Usage:  benchmark [OPTION(S)] ...
   or:  benchmark

Options:
    -w NUM      Warmup
                    Perform NUM warmup runs before the actual benchmark. This
                    can be used to fill (disk) caches for I/O-heavy programs.
                    Default: 0
    -m NUM      Min-runs
                    Perform at least NUM runs for each command. Default: 10
    -M NUM      Max-runs
                    Perform at most NUM runs for each command. Default: no limit
    -r NUM      Runs
                    Perform exactly NUM runs for each command. If this option is
                    not specified, the number of runs is determined
                    automatically.
    -s COMMAND  Setup
                    Execute COMMAND before each set of runs. This is useful for
                    compiling your software with the provided parameters, or to
                    do any other work that should happen once before a series of
                    benchmark runs, not every time as would happen with the
                    prepare option.
    -p COMMAND  Prepare
                    Execute COMMAND before each run. This is useful for clearing
                    disk caches, for example. The prepare option can be
                    specified once for all commands or multiple times, once for
                    each command. In the latter case, each preparation command
                    will be run prior to the corresponding benchmark command.
    -f COMMAND  Conclude
                    Execute COMMAND after each timing run. This is useful, for
                    example, for killing long-running processes started in
                    prepare (e.g. a web server). The conclude option can be
                    specified once for all commands or multiple times, once for
                    each command. In the latter case, each conclude command will
                    be run after the corresponding benchmark command.
    -c COMMAND  Cleanup
                    Execute COMMAND after the completion of all benchmarking
                    runs for each individual command to be benchmarked. This is
                    useful if the commands to be benchmarked produce artifacts
                    that need to be cleaned up.
    -i          Ignore-failure
                    Ignore non-zero exit codes of the benchmarked programs.
    -u          ASCII
                    Do not use Unicode characters in output.
    -N          No color
                    Do not use color in output.
    -S          Disable interactive
                    Disable interactive output and progress bars.
    -C FILE     Export CSV
                    Export the timing summary statistics as CSV to the given
                    FILE.
    -j FILE     Export JSON
                    Export the timing summary statistics and timings of
                    individual runs as JSON to the given FILE.
    -o WHERE    Output
                    Control where the output of the benchmark is redirected.
                    WHERE can be:
                      null:    Redirect both stdout and stderr to '/dev/null'
                               (default).
                      pipe:    Feed stdout through a pipe before discarding it
                               and redirect stderr to '/dev/null'.
                      inherit: Output the stdout and stderr.
                      FILE:    Write both stdout and stderr to the given FILE.
    -n NAME     Command-name
                    Give a meaningful NAME to a command. This can be specified
                    multiple times if several commands are benchmarked.
    -h          Display this help and exit
    -v          Output version information and exit

Examples:

Basic benchmark
$ benchmark 'sleep 0.3'

Benchmark two commands
$ benchmark 'find -iname "*.jpg"' 'fd -e jpg -uu'

Benchmark piped commands
$ benchmark 'seq 0 10000000 | factor' 'seq 0 10000000 | uu-factor'

Warmup runs
$ benchmark -w 3 'grep -R TODO *'

Parameterized benchmark
$ benchmark -p 'make clean' 'make -j '{1..12}
This performs benchmarks for 'make -j 1', 'make -j 2', … 'make -j 12'.

Parameterized benchmark with step size
$ benchmark 'sleep 0.'{3..7..2}
This performs benchmarks for 'sleep 0.3', 'sleep 0.5' and 'sleep 0.7'.

Parameterized benchmark with list
$ benchmark {gcc,clang}' -O3 main.c'
This performs benchmarks for 'gcc -O3 main.c' and 'clang -O3 main.c'.

```
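The parameterized examples above rely on plain Bash brace expansion: the calling shell expands the braces before the script runs, so each expanded word arrives as a separate command to benchmark. A quick way to preview the expansion (sequence expressions with a step value need Bash 4 or later):

```shell
# Brace expansion happens in the calling shell, so each expansion becomes
# one argument (one benchmarked command). printf prints one per line:
printf '%s\n' 'sleep 0.'{3..7..2}
# sleep 0.3
# sleep 0.5
# sleep 0.7
printf '%s\n' {gcc,clang}' -O3 main.c'
# gcc -O3 main.c
# clang -O3 main.c
```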

Contributing

Pull requests welcome! Ideas for contributions:

  • Support more of hyperfine's options
  • Add option to use the GNU time command (/usr/bin/time)
  • Add more examples
  • Improve the performance
  • Add tests

Owner

  • Name: Teal Dulcet
  • Login: tdulcet
  • Kind: user
  • Location: Portland, Oregon

👨‍💻 Computer Scientist, BS, CRTGR, MS @Thunderbird Council member

Citation (CITATION.cff)

cff-version: 1.2.0
title: Benchmarking Tool
message: >-
  If you use this software, please cite it using the
  metadata from this file.
type: software
authors:
  - given-names: Teal
    family-names: Dulcet
    orcid: 'https://orcid.org/0009-0008-6616-2631'
repository-code: 'https://github.com/tdulcet/Benchmarking-Tool'
abstract: A Linux command-line benchmarking tool.
license: MIT
version: '1.0'
references:
  - authors:
      - given-names: David
        family-names: Peter
        orcid: 'https://orcid.org/0000-0001-7950-9915'
    title: hyperfine
    type: software

GitHub Events

Total
  • Push event: 2
Last Year
  • Push event: 2

Committers

Last synced: 9 months ago

All Time
  • Total Commits: 13
  • Total Committers: 1
  • Avg Commits per committer: 13.0
  • Development Distribution Score (DDS): 0.0
Past Year
  • Commits: 1
  • Committers: 1
  • Avg Commits per committer: 1.0
  • Development Distribution Score (DDS): 0.0
Top Committers
  • Teal Dulcet (t****t@p****u): 13 commits
Committer Domains (Top 20 + Academic)
pdx.edu: 1

Issues and Pull Requests

Last synced: 9 months ago

All Time
  • Total issues: 5
  • Total pull requests: 0
  • Average time to close issues: 2 days
  • Average time to close pull requests: N/A
  • Total issue authors: 1
  • Total pull request authors: 0
  • Average comments per issue: 4.8
  • Average comments per pull request: 0
  • Merged pull requests: 0
  • Bot issues: 0
  • Bot pull requests: 0
Past Year
  • Issues: 0
  • Pull requests: 0
  • Average time to close issues: N/A
  • Average time to close pull requests: N/A
  • Issue authors: 0
  • Pull request authors: 0
  • Average comments per issue: 0
  • Average comments per pull request: 0
  • Merged pull requests: 0
  • Bot issues: 0
  • Bot pull requests: 0
Top Authors
Issue Authors
  • sharkdp (5)
Pull Request Authors
Top Labels
Issue Labels
  • enhancement (2)
  • bug (2)
  • question (1)
  • help wanted (1)
  • good first issue (1)
Pull Request Labels