BenchmarkPlots

A benchmarking framework for the Julia language

https://github.com/juliaci/benchmarktools.jl

Science Score: 54.0%

This score indicates how likely this project is to be science-related based on various indicators:

  • CITATION.cff file
    Found CITATION.cff file
  • codemeta.json file
    Found codemeta.json file
  • .zenodo.json file
    Found .zenodo.json file
  • DOI references
  • Academic publication links
  • Committers with academic emails
    7 of 68 committers (10.3%) from academic institutions
  • Institutional organization owner
  • JOSS paper metadata
  • Scientific vocabulary similarity
    Low similarity (18.9%) to scientific vocabulary

Keywords

benchmark julia julia-language

Keywords from Contributors

graphics optim julialang optimisation unconstrained-optimisation unconstrained-optimization flux integration the-human-brain finite-elements
Last synced: 6 months ago

Repository

A benchmarking framework for the Julia language

Basic Info
  • Host: GitHub
  • Owner: JuliaCI
  • License: other
  • Language: Julia
  • Default Branch: main
  • Homepage:
  • Size: 1.68 MB
Statistics
  • Stars: 650
  • Watchers: 7
  • Forks: 102
  • Open Issues: 84
  • Releases: 33
Topics
benchmark julia julia-language
Created almost 10 years ago · Last pushed 6 months ago
Metadata Files
Readme License Citation

README.md

BenchmarkTools.jl

BenchmarkTools logo

BenchmarkTools makes performance tracking of Julia code easy by supplying a framework for writing and running groups of benchmarks as well as comparing benchmark results.

This package is used to write and run the benchmarks found in BaseBenchmarks.jl.

The CI infrastructure for automated performance testing of the Julia language is not in this package, but can be found in Nanosoldier.jl.

Installation

BenchmarkTools is a Julia Language package. To install BenchmarkTools, open Julia's interactive session (the REPL), press the ] key to enter package mode, then run the following command:

```julia
pkg> add BenchmarkTools
```
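
The same installation can also be done programmatically through Julia's Pkg standard library, which is convenient in scripts or CI jobs; a minimal equivalent sketch:

```julia
# Programmatic equivalent of `pkg> add BenchmarkTools`
using Pkg
Pkg.add("BenchmarkTools")
```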

Documentation

If you're just getting started, check out the manual for a thorough explanation of BenchmarkTools.

If you want to explore the BenchmarkTools API, see the reference document.

If you want a short example of a toy benchmark suite, see the sample file in this repo (benchmark/benchmarks.jl).

If you want an extensive example of a benchmark suite being used in the real world, you can look at the source code of BaseBenchmarks.jl.

If you're benchmarking on Linux, I wrote up a series of tips and tricks to help eliminate noise during performance tests.

Quick Start

The primary macro provided by BenchmarkTools is @benchmark:

```julia
julia> using BenchmarkTools

# The setup expression is run once per sample, and is not included in the
# timing results. Note that each sample can require multiple evaluations of
# the benchmark kernel. See the BenchmarkTools manual for details.
julia> @benchmark sort(data) setup=(data=rand(10))
BenchmarkTools.Trial: 10000 samples with 972 evaluations.
 Range (min … max):  69.399 ns …   1.066 μs  ┊ GC (min … max): 0.00% … 0.00%
 Time  (median):     83.850 ns               ┊ GC (median):    0.00%
 Time  (mean ± σ):   89.471 ns ±  53.666 ns  ┊ GC (mean ± σ):  3.25% ±  5.16%

        ▁▄▇█▇▆▃▁
  ▂▁▁▂▂▃▄▆████████▆▅▄▃▃▃▃▂▂▂▂▂▂▂▂▂▂▂▂▂▂▂▂▂▂▂▂▂▂▂▂▂▂▂▂▂▂▂▂▂▂▂▂▂▂▂▂▂▂
  69.4 ns          Histogram: frequency by time          145 ns (top 1%)

 Memory estimate: 160 bytes, allocs estimate: 1.
```

For quick sanity checks, one can use the @btime macro, which is a convenience wrapper around @benchmark whose output is analogous to Julia's built-in @time macro:

```julia
# The seconds expression helps set a rough time budget; see the manual for more explanation.
julia> @btime sin(x) setup=(x=rand()) seconds=3
  4.361 ns (0 allocations: 0 bytes)
0.49587200950472454
```

If the expression you want to benchmark depends on external variables, you should use $ to "interpolate" them into the benchmark expression to avoid the problems of benchmarking with globals. Essentially, any interpolated variable $x or expression $(...) is "pre-computed" before benchmarking begins, and passed to the benchmark as a function argument:

```julia
julia> A = rand(3,3);

julia> @btime inv($A);            # we interpolate the global variable A with $A
  1.191 μs (10 allocations: 2.31 KiB)

julia> @btime inv($(rand(3,3)));  # interpolation: the rand(3,3) call occurs before benchmarking
  1.192 μs (10 allocations: 2.31 KiB)

julia> @btime inv(rand(3,3));     # the rand(3,3) call is included in the benchmark time
  1.295 μs (11 allocations: 2.47 KiB)
```

Sometimes, inlining values in simple expressions can give the compiler more information than you intended, causing it to "cheat" the benchmark by hoisting the calculation out of the benchmark code:

```julia
julia> @btime 1 + 2
  0.024 ns (0 allocations: 0 bytes)
3
```

As a rule of thumb, if a benchmark reports that it took less than a nanosecond to perform, this hoisting probably occurred. You can avoid this using interpolation:

```julia
julia> a = 1; b = 2
2

julia> @btime $a + $b
  1.277 ns (0 allocations: 0 bytes)
3
```

As described in the manual, the BenchmarkTools package supports many other features, both for additional output and for more fine-grained control over the benchmarking process.
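
For example, benchmarks can be organized into named groups and run as a suite. The sketch below uses the BenchmarkGroup, @benchmarkable, tune!, and run APIs described in the manual; the group names and kernels are purely illustrative:

```julia
using BenchmarkTools

# Group related benchmarks into a (possibly nested) suite.
suite = BenchmarkGroup()
suite["sorting"] = BenchmarkGroup(["array"])
suite["sorting"]["small"] = @benchmarkable sort(data) setup=(data = rand(10))
suite["sorting"]["large"] = @benchmarkable sort(data) setup=(data = rand(10_000))

tune!(suite)                         # pick evaluation/sample counts per benchmark
results = run(suite; verbose=true)   # returns a BenchmarkGroup of Trials
median(results["sorting"])           # summarize each trial with the median estimator
```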

Why does this package exist?

Our story begins with two packages, "Benchmarks" and "BenchmarkTrackers". The Benchmarks package implemented an execution strategy for collecting and summarizing individual benchmark results, while BenchmarkTrackers implemented a framework for organizing, running, and determining regressions of groups of benchmarks. Under the hood, BenchmarkTrackers relied on Benchmarks for actual benchmark execution.

For a while, the Benchmarks + BenchmarkTrackers system was used for automated performance testing of Julia's Base library. It soon became apparent that the system suffered from a variety of issues:

  1. Individual sample noise could significantly change the execution strategy used to collect further samples.
  2. The estimates used to characterize benchmark results and to detect regressions were statistically vulnerable to noise (i.e. not robust).
  3. Different benchmarks have different noise tolerances, but there was no way to tune this parameter on a per-benchmark basis.
  4. Running benchmarks took a long time - an order of magnitude longer than theoretically necessary for many functions.
  5. Using the system in the REPL (for example, to reproduce regressions locally) was often cumbersome.

The BenchmarkTools package is a response to these issues, designed by examining user reports and the benchmark data generated by the old system. BenchmarkTools offers the following solutions to the corresponding issues above:

  1. Benchmark execution parameters are configured separately from the execution of the benchmark itself. This means that subsequent experiments are performed more consistently, avoiding branching "substrategies" based on small numbers of samples.
  2. A variety of simple estimators are supported, and the user can pick which one to use for regression detection.
  3. Noise tolerance has been made a per-benchmark configuration parameter.
  4. Benchmark configuration parameters can be easily cached and reloaded, significantly reducing benchmark execution time (see the sketch after this list).
  5. The API is simpler, more transparent, and overall easier to use.
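
A minimal sketch of points 1, 2, and 4, using the parameter-caching and comparison APIs described in the manual (BenchmarkTools.save, loadparams!, judge); the file name, benchmark, and tolerance below are illustrative:

```julia
using BenchmarkTools

suite = BenchmarkGroup()
suite["trig"] = @benchmarkable sin(x) setup=(x = rand())

# Tune once and cache the execution parameters so later runs skip re-tuning.
tune!(suite)
BenchmarkTools.save("params.json", params(suite))

# In a later session: reload the cached parameters, then run.
loadparams!(suite, BenchmarkTools.load("params.json")[1], :evals, :samples)
new_results = run(suite)
old_results = run(suite)  # stand-in for a previously saved baseline

# Compare with a chosen estimator and a per-benchmark noise tolerance.
judge(minimum(new_results["trig"]), minimum(old_results["trig"]); time_tolerance = 0.05)
```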

Acknowledgements

This package was authored primarily by Jarrett Revels (@jrevels). Additionally, I'd like to thank the following people:

  • John Myles White, for authoring the original Benchmarks package, which greatly inspired BenchmarkTools
  • Andreas Noack, for statistics help and investigating weird benchmark time distributions
  • Oscar Blumberg, for discussions on noise robustness
  • Jiahao Chen, for discussions on error analysis

Owner

  • Name: Julia CI (Continuous Integration)
  • Login: JuliaCI
  • Kind: organization

Continuous integration (CI) support for the Julia programming language

Citation (CITATION.bib)

@ARTICLE{BenchmarkTools.jl-2016,
  author        = {{Chen}, Jiahao and {Revels}, Jarrett},
  title         = "{Robust benchmarking in noisy environments}",
  journal       = {arXiv e-prints},
  keywords      = {Computer Science - Performance, 68N30, B.8.1, D.2.5},
  year          = 2016,
  month         = "Aug",
  eid           = {arXiv:1608.04295},
  archivePrefix = {arXiv},
  eprint        = {1608.04295},
  primaryClass  = {cs.PF},
  adsurl        = {https://ui.adsabs.harvard.edu/abs/2016arXiv160804295C},
  adsnote       = {Provided by the SAO/NASA Astrophysics Data System}
}

GitHub Events

Total
  • Create event: 4
  • Release event: 1
  • Issues event: 16
  • Watch event: 37
  • Delete event: 4
  • Issue comment event: 35
  • Push event: 21
  • Pull request review comment event: 1
  • Pull request review event: 7
  • Pull request event: 22
  • Fork event: 6
Last Year
  • Create event: 4
  • Release event: 1
  • Issues event: 16
  • Watch event: 37
  • Delete event: 4
  • Issue comment event: 35
  • Push event: 21
  • Pull request review comment event: 1
  • Pull request review event: 7
  • Pull request event: 22
  • Fork event: 6

Committers

Last synced: 9 months ago

All Time
  • Total Commits: 349
  • Total Committers: 68
  • Avg Commits per committer: 5.132
  • Development Distribution Score (DDS): 0.596
Past Year
  • Commits: 33
  • Committers: 7
  • Avg Commits per committer: 4.714
  • Development Distribution Score (DDS): 0.394
Top Committers
| Name | Email | Commits |
| --- | --- | --- |
| Jarrett Revels | j****s@g****m | 141 |
| Willow Ahrens | w****w@c****u | 22 |
| Valentin Churavy | v****y | 19 |
| Guillaume Dalle | 2****e | 15 |
| Jameson Nash | v****h@g****m | 12 |
| Roger-Luo | r****8@g****m | 11 |
| Alex Arslan | a****n@c****t | 9 |
| Zentrik | Z****k | 9 |
| singularitti | s****i@o****m | 8 |
| Tim Holy | t****y@g****m | 7 |
| dependabot[bot] | 4****] | 7 |
| Mosè Giordano | m****e@g****g | 5 |
| Takafumi Arakaki | a****f@g****m | 5 |
| Steven G. Johnson | s****j@m****u | 5 |
| Kristoffer Carlsson | k****9@g****m | 4 |
| Fredrik Ekre | f****e@c****e | 3 |
| TEC | t****c@t****m | 3 |
| milesfrain | m****n | 3 |
| Vincent Yu | v@v****m | 3 |
| Tim Besard | t****d@g****m | 3 |
| Dilum Aluthge | d****m@a****m | 2 |
| Lilith Orion Hafner | 6****r | 2 |
| Jerry Ling | p****n@j****v | 2 |
| Keno Fischer | k****o@j****m | 2 |
| Mus | m****m@o****m | 2 |
| Pietro Monticone | 3****e | 2 |
| Tony Kelman | t****y@k****t | 2 |
| Fons van der Plas | f****s@g****m | 1 |
| Felix Benning | f****g@g****m | 1 |
| Federico Stra | s****o@g****m | 1 |
and 38 more...

Issues and Pull Requests

Last synced: 6 months ago

All Time
  • Total issues: 132
  • Total pull requests: 97
  • Average time to close issues: over 1 year
  • Average time to close pull requests: about 1 month
  • Total issue authors: 107
  • Total pull request authors: 46
  • Average comments per issue: 3.36
  • Average comments per pull request: 3.99
  • Merged pull requests: 67
  • Bot issues: 0
  • Bot pull requests: 8
Past Year
  • Issues: 10
  • Pull requests: 16
  • Average time to close issues: 1 minute
  • Average time to close pull requests: 24 days
  • Issue authors: 10
  • Pull request authors: 10
  • Average comments per issue: 0.5
  • Average comments per pull request: 0.44
  • Merged pull requests: 8
  • Bot issues: 0
  • Bot pull requests: 1
Top Authors
Issue Authors
  • LilithHafner (5)
  • StefanKarpinski (4)
  • jrevels (4)
  • gdalle (4)
  • willow-ahrens (3)
  • filchristou (2)
  • Arkoniak (2)
  • Moelf (2)
  • vchuravy (2)
  • chunjiw (2)
  • adienes (2)
  • gustaphe (2)
  • KristofferC (2)
  • bvdmitri (2)
  • giordano (2)
Pull Request Authors
  • Zentrik (17)
  • gdalle (15)
  • dependabot[bot] (10)
  • willow-ahrens (8)
  • Priynsh (6)
  • vchuravy (6)
  • timholy (4)
  • giordano (3)
  • gustaphe (3)
  • fredrikekre (2)
  • vtjnash (2)
  • KlausC (2)
  • milesfrain (2)
  • bclrk (2)
  • LilithHafner (2)
Top Labels
Issue Labels
enhancement (37), bug (21), documentation (18), wontfix (4), question (4), help wanted (2), invalid (1)
Pull Request Labels
dependencies (10), enhancement (9), merge-me (2), needs tests (1), github_actions (1)

Packages

  • Total packages: 4
  • Total downloads:
    • julia 9,819 total
  • Total dependent packages: 148
    (may contain duplicates)
  • Total dependent repositories: 303
    (may contain duplicates)
  • Total versions: 105
juliahub.com: BenchmarkTools

A benchmarking framework for the Julia language

  • Versions: 22
  • Dependent Packages: 148
  • Dependent Repositories: 303
  • Downloads: 9,808 Total
Rankings
Dependent repos count: 0.1%
Dependent packages count: 0.6%
Average: 0.7%
Forks count: 1.0%
Stargazers count: 1.2%
Last synced: 6 months ago
proxy.golang.org: github.com/JuliaCI/BenchmarkTools.jl
  • Versions: 41
  • Dependent Packages: 0
  • Dependent Repositories: 0
Rankings
Dependent packages count: 5.5%
Average: 5.6%
Dependent repos count: 5.8%
Last synced: 6 months ago
proxy.golang.org: github.com/juliaci/benchmarktools.jl
  • Versions: 41
  • Dependent Packages: 0
  • Dependent Repositories: 0
Rankings
Dependent packages count: 5.5%
Average: 5.6%
Dependent repos count: 5.8%
Last synced: 6 months ago
juliahub.com: BenchmarkPlots

A benchmarking framework for the Julia language

  • Versions: 1
  • Dependent Packages: 0
  • Dependent Repositories: 0
  • Downloads: 11 Total
Rankings
Stargazers count: 0.9%
Forks count: 1.0%
Dependent repos count: 9.9%
Average: 12.7%
Dependent packages count: 38.9%
Last synced: 6 months ago