SciMLBenchmarks
Scientific machine learning (SciML) benchmarks, AI for science, and (differential) equation solvers. Covers Julia, Python (PyTorch, Jax), MATLAB, R
Science Score: 67.0%
This score indicates how likely this project is to be science-related based on various indicators:
- ✓ CITATION.cff file: found
- ✓ codemeta.json file: found
- ✓ .zenodo.json file: found
- ✓ DOI references: 2 DOI reference(s) found in README
- ○ Academic publication links
- ✓ Committers with academic emails: 4 of 62 committers (6.5%) from academic institutions
- ○ Institutional organization owner
- ○ JOSS paper metadata
- ○ Scientific vocabulary similarity: low similarity (13.2%) to scientific vocabulary
Repository
Scientific machine learning (SciML) benchmarks, AI for science, and (differential) equation solvers. Covers Julia, Python (PyTorch, Jax), MATLAB, R
Basic Info
- Host: GitHub
- Owner: SciML
- License: mit
- Language: MATLAB
- Default Branch: master
- Homepage: https://docs.sciml.ai/SciMLBenchmarksOutput/stable/
- Size: 186 MB
Statistics
- Stars: 329
- Watchers: 14
- Forks: 101
- Open Issues: 62
- Releases: 4
Metadata Files
README.md
SciMLBenchmarks.jl: Benchmarks for Scientific Machine Learning (SciML) and Equation Solvers
SciMLBenchmarks.jl holds webpages, pdfs, and notebooks showing the benchmarks for the SciML Scientific Machine Learning Software ecosystem, including:
- Benchmarks of equation solver implementations
- Speed and robustness comparisons of methods for parameter estimation / inverse problems
- Training universal differential equations (and subsets like neural ODEs)
- Training of physics-informed neural networks (PINNs)
- Surrogate comparisons, including radial basis functions, neural operators (DeepONets, Fourier Neural Operators), and more
The SciML Bench suite is made to be a comprehensive open source benchmark from the ground up, covering the methods of computational science and scientific computing all the way to AI for science.
Rules: Optimal, Fair, and Reproducible
These benchmarks are meant to represent good, optimized coding style. Benchmarks are preferred to be run on the provided open benchmarking hardware for full reproducibility (though in some cases, such as with language barriers, this can be difficult). Each benchmark is documented with the compute devices used along with package versions for necessary reproduction. These benchmarks attempt to measure work-precision efficiency, either by timing while approximately matching the error or by building work-precision diagrams for direct comparison of speed at given error tolerances.
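As an illustration of the work-precision methodology, here is a minimal sketch using DiffEqDevTools.jl; the toy linear problem, solver setups, and tolerance sweep are illustrative choices, not one of the official benchmarks:

```julia
using OrdinaryDiffEq, DiffEqDevTools, Plots

# Toy linear ODE with a known analytical solution, so errors are exact
f = (u, p, t) -> 1.01u
fun = ODEFunction(f, analytic = (u0, p, t) -> u0 * exp(1.01t))
prob = ODEProblem(fun, 0.5, (0.0, 1.0))

# Sweep tolerances and compare solvers on a work-precision diagram
abstols = 1.0 ./ 10.0 .^ (3:10)
reltols = 1.0 ./ 10.0 .^ (3:10)
setups = [Dict(:alg => Tsit5()), Dict(:alg => Vern7()), Dict(:alg => DP5())]
wp = WorkPrecisionSet(prob, abstols, reltols, setups; numruns = 10)
plot(wp)  # runtime vs. error, one curve per solver
```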
If any of the code from any of the languages can be improved, please open a pull request.
For critiques of benchmarks, please open a pull request that changes the code in the desired manner. Issues with recommended changes are generally vague and not actionable, while pull requests with code changes are exact. Thus if there is something you think should be changed in the code, please make the recommended change in the code!
Results
To view the results of the SciML Benchmarks, go to benchmarks.sciml.ai. By default, this will lead to the latest tagged version of the benchmarks. To see the in-development version of the benchmarks, go to https://benchmarks.sciml.ai/dev/.
Static outputs in pdf, markdown, and html reside in SciMLBenchmarksOutput.
Citing
To cite the SciML Benchmarks, please cite the following:
```bib
@article{rackauckas2019confederated,
  title={Confederated modular differential equation APIs for accelerated algorithm development and benchmarking},
  author={Rackauckas, Christopher and Nie, Qing},
  journal={Advances in Engineering Software},
  volume={132},
  pages={1--6},
  year={2019},
  publisher={Elsevier}
}

@article{DifferentialEquations.jl-2017,
  author = {Rackauckas, Christopher and Nie, Qing},
  doi = {10.5334/jors.151},
  journal = {The Journal of Open Research Software},
  keywords = {Applied Mathematics},
  note = {Exported from https://app.dimensions.ai on 2019/05/05},
  number = {1},
  pages = {},
  title = {DifferentialEquations.jl – A Performant and Feature-Rich Ecosystem for Solving Differential Equations in Julia},
  url = {https://app.dimensions.ai/details/publication/pub.1085583166 and http://openresearchsoftware.metajnl.com/articles/10.5334/jors.151/galley/245/download/},
  volume = {5},
  year = {2017}
}
```
Current Summary
The following is a quick summary of the benchmarks. These paint broad strokes over the set of tested equations, and some specific examples may differ.
Non-Stiff ODEs
- OrdinaryDiffEq.jl's methods are the most efficient by a good amount
- The `Vern` methods tend to do the best in every benchmark of this category (see the sketch below for typical usage).
- At lower tolerances, `Tsit5` does well consistently.
- ARKODE and Hairer's `dopri5`/`dop853` perform very similarly, but are both far less efficient than the `Vern` methods.
- The multistep methods, `CVODE_Adams` and `lsoda`, tend to not do very well.
- The ODEInterface multistep method `ddeabm` does not do as well as the other multistep methods.
- ODE.jl's methods are not able to consistently solve the problems.
- Fixed time step methods are less efficient than the adaptive methods.
Stiff ODEs
- In this category, the best methods are much more problem dependent.
- For smaller problems:
  - `Rosenbrock23`, `lsoda`, and `TRBDF2` tend to be the most efficient at high tolerances.
  - `Rodas4P` and `Rodas5P` tend to be the most efficient at low tolerances (see the sketch below).
- For larger problems (Filament PDE):
  - `FBDF` and `QNDF` do the best at all normal tolerances.
  - The ESDIRK methods like `TRBDF2` and `KenCarp4` can come close.
- `radau` is always the most efficient when tolerances go to the low extreme (`1e-13`).
- Fixed time step methods tend to diverge on every tested problem because the high stiffness results in divergence of the Newton solvers.
- ARKODE is very inconsistent and requires a lot of tweaking in order to not diverge on many of the tested problems. When it doesn't diverge, the similar algorithms in OrdinaryDiffEq.jl (`KenCarp4`) are much more efficient in most cases.
- GeometricIntegrators.jl fails to converge on any of the tested problems.
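A minimal sketch of the stiff recommendations above, using the classic ROBER problem as an illustrative small stiff system:

```julia
using OrdinaryDiffEq

# ROBER: a classic small stiff benchmark problem
function rober!(du, u, p, t)
    y1, y2, y3 = u
    du[1] = -0.04y1 + 1e4 * y2 * y3
    du[2] = 0.04y1 - 1e4 * y2 * y3 - 3e7 * y2^2
    du[3] = 3e7 * y2^2
end
prob = ODEProblem(rober!, [1.0, 0.0, 0.0], (0.0, 1e5))

sol_small = solve(prob, Rodas5P())  # Rosenbrock methods excel on small stiff systems
sol_large = solve(prob, FBDF())     # FBDF/QNDF scale better to large stiff systems
```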
Dynamical ODEs
- Higher order (generally order >=6) symplectic integrators are much more efficient than the lower order counterparts.
- For high accuracy, using a symplectic integrator is not preferred. Their extra cost is not necessary, since the other integrators avoid drift simply by having low enough error.
- In this class, the `DPRKN` methods are by far the most efficient (see the sketch below). The `Vern` methods do well for not being specific to the domain.
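A minimal sketch of a `DPRKN` solve on a second-order problem (the harmonic oscillator here is an illustrative choice):

```julia
using OrdinaryDiffEq

# Harmonic oscillator as a second-order ODE: u'' = -u
accel!(ddu, du, u, p, t) = (ddu .= -u)
prob = SecondOrderODEProblem(accel!, [0.0], [1.0], (0.0, 100.0))  # (f, du0, u0, tspan)

sol = solve(prob, DPRKN6())  # Runge-Kutta-Nystrom method specialized for second-order ODEs
```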
Non-Stiff SDEs
- For simple 1-dimensional SDEs at low accuracy, the `EM` and `RKMil` methods can do well. Beyond that, they are simply outclassed (see the sketch below).
- The `SRA` and `SRI` methods are very similar within-class on the simple SDEs.
- `SRA3` is the most efficient when applicable and the tolerances are low.
- Generally, only low accuracy is necessary to get to the sampling error of the mean.
- The adaptive method is very conservative with error estimates.
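A minimal sketch contrasting a fixed-step low-accuracy solve with an adaptive high-order one (geometric Brownian motion is an illustrative scalar SDE):

```julia
using StochasticDiffEq

# Geometric Brownian motion: du = 1.01u dt + 0.3u dW
f(u, p, t) = 1.01u
g(u, p, t) = 0.3u
prob = SDEProblem(f, g, 0.5, (0.0, 1.0))

sol_em = solve(prob, EM(); dt = 1e-3)  # fixed-step Euler-Maruyama, fine at low accuracy
sol_sri = solve(prob, SRIW1())         # adaptive high strong-order method
```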
Stiff SDEs
- The high order adaptive methods (`SRIW1`) generally do well on stiff problems.
- The "standard" low-order implicit methods, `ImplicitEM` and `ImplicitRK`, do not do well on all stiff problems (see the sketch below). Some exceptions apply to well-behaved problems like the Stochastic Heat Equation.
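A minimal sketch of a drift-implicit solve; the stiff scalar test equation and the starting `dt` are illustrative assumptions:

```julia
using StochasticDiffEq

# Stiff drift with small additive noise: du = -100u dt + 0.1 dW
f!(du, u, p, t) = (du .= -100.0 .* u)
g!(du, u, p, t) = (du .= 0.1)
prob = SDEProblem(f!, g!, [1.0], (0.0, 1.0))

sol = solve(prob, ImplicitEM(); dt = 0.01)  # drift-implicit Euler-Maruyama handles the stiff drift
```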
Non-Stiff DDEs
- The efficiency ranking tends to match the ODE Tests, but the cutoff from low to high tolerance is lower.
- `Tsit5` does well in a large class of problems here.
- The `Vern` methods do well in low tolerance cases.
Stiff DDEs
- The Rosenbrock methods, specifically `Rodas5P`, perform well (a combined DDE sketch follows below).
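A minimal sketch of both DDE regimes with DelayDiffEq.jl (the scalar delay equation is an illustrative test case):

```julia
using DelayDiffEq

# u'(t) = -u(t - 1) with constant history u(t) = 1 for t <= 0
f(u, h, p, t) = -h(p, t - 1.0)
hist(p, t) = 1.0
prob = DDEProblem(f, 1.0, hist, (0.0, 10.0); constant_lags = [1.0])

sol_nonstiff = solve(prob, MethodOfSteps(Tsit5()))  # non-stiff: wrap an explicit RK method
sol_stiff = solve(prob, MethodOfSteps(Rodas5P()))   # stiff: wrap a Rosenbrock method
```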
Parameter Estimation
- Broadly, two different approaches have been used: Bayesian inference and optimization algorithms.
- In general, the optimization algorithms appear to perform more accurately, but that can be attributed to the larger number of data points used in the optimization cases. The Bayesian approach tends to be the slower of the two, so fewer data points are used; its accuracy can increase when enough data is used.
- Among the available optimization algorithms, BBO from the BlackBoxOptim package and `GN_CRS2_LM` from the NLopt package perform best in the global case, while `LD_SLSQP`, `LN_BOBYQA`, and `LN_NELDERMEAD` from NLopt perform best in the local case.
- The QuadDIRECT algorithm gives very good results on the shorter problems but does not do very well on the longer problems.
- The choice of global versus local optimization makes a huge difference in the timings. BBO tends to find the correct solution in a global optimization setup (see the sketch below). For local optimization, most methods in NLopt, like `:LN_BOBYQA`, solve the problem very fast but require a good initial condition.
- The different backend options available for the Bayesian methods offer tradeoffs between time, accuracy, and control. Sufficiently high accuracy can be reached with any of the backends by fine-tuning the step size, the constraints on the parameters, the tightness of the priors, and the number of iterations.
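A minimal global-optimization sketch in the spirit of these benchmarks, using Optimization.jl with the BlackBoxOptim backend; the problem, loss, bounds, and iteration budget are illustrative assumptions:

```julia
using OrdinaryDiffEq, Optimization, OptimizationBBO

# Recover Lotka-Volterra parameters from synthetic data
function lotka!(du, u, p, t)
    du[1] = p[1] * u[1] - p[2] * u[1] * u[2]
    du[2] = -p[3] * u[2] + p[4] * u[1] * u[2]
end
ptrue = [1.5, 1.0, 3.0, 1.0]
prob = ODEProblem(lotka!, [1.0, 1.0], (0.0, 10.0), ptrue)
ts = 0.0:0.5:10.0
data = Array(solve(prob, Tsit5(); saveat = ts))

# L2 loss between the simulated trajectory and the data
loss(p, _) = sum(abs2, Array(solve(remake(prob; p = p), Tsit5(); saveat = ts)) .- data)

optprob = OptimizationProblem(OptimizationFunction(loss), ones(4);
                              lb = fill(0.1, 4), ub = fill(5.0, 4))
res = solve(optprob, BBO_adaptive_de_rand_1_bin_radiuslimited(); maxiters = 10_000)
res.u  # estimated parameters, ideally close to ptrue
```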
Interactive Notebooks
To generate the interactive notebooks, first install SciMLBenchmarks, instantiate the environment, and then run `SciMLBenchmarks.open_notebooks()`. This looks as follows:

```julia
]add SciMLBenchmarks#master
]activate SciMLBenchmarks
]instantiate
using SciMLBenchmarks
SciMLBenchmarks.open_notebooks()
```

The benchmarks will be generated at your `pwd()` in a folder called `generated_notebooks`.
Note that when running the benchmarks, the packages are not automatically added. Thus you will need to add the packages manually or use the internal Project/Manifest TOMLs to instantiate the correct packages. This can be done by activating the folder of the benchmarks. For example,

```julia
using Pkg
Pkg.activate(joinpath(pkgdir(SciMLBenchmarks), "benchmarks", "NonStiffODE"))
Pkg.instantiate()
```

will add all of the packages required to run any benchmark in the `NonStiffODE` folder.
Contributing
All of the files are generated from the Weave.jl files in the benchmarks folder of the SciMLBenchmarks.jl repository. The generation process runs automatically, so one does not necessarily need to test the Weave process locally. Instead, simply open a PR that adds or updates a file in the benchmarks folder, and the PR will generate the benchmark on demand. Its artifacts can then be inspected in Buildkite, as described below, before merging. Note that it will use the Project.toml and Manifest.toml of the subfolder, so any changes to dependencies require that those be updated.
Reporting Bugs and Issues
Report any bugs or issues at the SciMLBenchmarks repository.
Inspecting Benchmark Results
To see benchmark results before merging, click into the Buildkite build, open the Artifacts tab, and then inspect the generated results.
Manually Generating Files
All of the files are generated from the Weave.jl files in the benchmarks folder. To run the generation process, do for example:
```julia
]activate SciMLBenchmarks # Get all of the packages
using SciMLBenchmarks
SciMLBenchmarks.weave_file(joinpath(pkgdir(SciMLBenchmarks), "benchmarks", "NonStiffODE"), "linear_wpd.jmd")
```
To generate all of the files in a folder, for example, run:
```julia
SciMLBenchmarks.weave_folder(joinpath(pkgdir(SciMLBenchmarks), "benchmarks", "NonStiffODE"))
```
To generate all of the notebooks, do:
```julia
SciMLBenchmarks.weave_all()
```
Each of the benchmarks displays the computer characteristics at the bottom of the benchmark. Since performance-necessary computations are normally performed on compute clusters, the official benchmarks use a workstation with an AMD EPYC 7502 32-Core Processor @ 2.50GHz to match the performance characteristics of a standard node in a high performance computing (HPC) cluster or cloud computing setup.
Choosing a Reference Solution
For almost all equations, there is no analytical solution. A low tolerance reference solution is required in order to compute the error. However, there are many questions as to the potential of biasing the results via a reference computed from a given program. If we use a reference solution from Julia, does that make our errors lower?
The answer is no because all of the equation solvers should be convergent to the same solution. Because of this, it does not matter which solver is used to generate the reference solution. However, caution is required to ensure that the reference solution is sufficiently accurate.
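As a minimal sketch of this workflow (the toy problem and solver choice are illustrative): generate the reference at extremely tight tolerances, then feed it to the work-precision machinery as the error baseline.

```julia
using OrdinaryDiffEq, DiffEqDevTools

# Toy problem standing in for an equation with no analytical solution
f!(du, u, p, t) = (du .= -u .+ sin(t))
prob = ODEProblem(f!, [1.0], (0.0, 10.0))

# A very-low-tolerance solve serves as the reference for error computation
ref = solve(prob, Vern9(); abstol = 1e-14, reltol = 1e-14)
appxsol = TestSolution(ref)  # pass as `appxsol` to WorkPrecisionSet
```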
Thankfully, there's a very clear indicator of when a reference solution is not sufficiently accurate. Because all of the other methods are converging to a different solution, there will be a digit of accuracy at which the other solutions stop converging to the reference. When this occurs, all solutions give a straight line, as seen here:

In this image (taken from the TransistorAmplifierDAE benchmark), the second `Rodas5P` and `Rodas4` are from a different problem implementation, and you can see they hit lower errors. But all of the others use the same reference solution and seem to "hit a wall" at around `1e-5`. This is because the chosen reference solution was only accurate to about `1e-5`. Changing to a different reference solution makes them all converge:

This shows that all that truly matters is that the chosen reference is sufficiently accurate, and any walling behavior is an indicator that some method in the benchmark set is more accurate than the reference (in which case the benchmark should be updated to use the more accurate reference).
Owner
- Name: SciML Open Source Scientific Machine Learning
- Login: SciML
- Kind: organization
- Email: contact@chrisrackauckas.com
- Website: https://sciml.ai
- Twitter: SciML_Org
- Repositories: 170
- Profile: https://github.com/SciML
Open source software for scientific machine learning
Citation (CITATION.bib)
@article{rackauckas2019confederated,
title={Confederated modular differential equation APIs for accelerated algorithm development and benchmarking},
author={Rackauckas, Christopher and Nie, Qing},
journal={Advances in Engineering Software},
volume={132},
pages={1--6},
year={2019},
publisher={Elsevier}
}
@article{DifferentialEquations.jl-2017,
author = {Rackauckas, Christopher and Nie, Qing},
doi = {10.5334/jors.151},
journal = {The Journal of Open Research Software},
keywords = {Applied Mathematics},
note = {Exported from https://app.dimensions.ai on 2019/05/05},
number = {1},
pages = {},
title = {DifferentialEquations.jl – A Performant and Feature-Rich Ecosystem for Solving Differential Equations in Julia},
url = {https://app.dimensions.ai/details/publication/pub.1085583166 and http://openresearchsoftware.metajnl.com/articles/10.5334/jors.151/galley/245/download/},
volume = {5},
year = {2017}
}
GitHub Events
Total
- Issues event: 15
- Watch event: 15
- Delete event: 187
- Issue comment event: 339
- Push event: 231
- Pull request review event: 77
- Pull request review comment event: 73
- Pull request event: 466
- Fork event: 21
- Create event: 187
Last Year
- Issues event: 15
- Watch event: 15
- Delete event: 187
- Issue comment event: 339
- Push event: 231
- Pull request review event: 77
- Pull request review comment event: 73
- Pull request event: 466
- Fork event: 21
- Create event: 187
Committers
Last synced: 8 months ago
Top Committers
| Name | Email | Commits |
|---|---|---|
| Chris Rackauckas | a****s@c****m | 822 |
| CompatHelper Julia | c****y@j****g | 370 |
| github-actions[bot] | 4****] | 111 |
| Avik Pal | a****l@m****u | 89 |
| Torkel | t****n@g****m | 83 |
| paramthakkar123 | p****4@g****m | 67 |
| Vaibhav Dixit | v****t@g****m | 54 |
| Venkateshprasad | v****k@g****m | 51 |
| Sebastian Micluța-Câmpeanu | m****5@g****m | 48 |
| Elliot Saba | s****t@g****m | 46 |
| xtalax | a****y@g****m | 42 |
| HAO HAO | H****7@i****k | 39 |
| Chris Elrod | e****c@g****m | 36 |
| gzagatti | g****i | 36 |
| anastasia21112 | a****2@g****m | 36 |
| Sam Isaacson | i****s | 33 |
| github-actions[bot] | a****s@g****m | 28 |
| ErikQQY | 2****3@q****m | 27 |
| Utkarsh | r****0@g****m | 23 |
| vyudu | v****n@g****m | 21 |
| Anant Thazhemadam | a****m@g****m | 20 |
| KirillZubov | k****3@g****m | 20 |
| MRIDUL JAIN | 1****l | 19 |
| Vasily Ilin | v****7@g****m | 17 |
| Guillaume Dalle | 2****e | 11 |
| David Widmann | d****t@d****e | 10 |
| Chris de Graaf | me@c****v | 9 |
| Yingbo Ma | m****5@g****m | 8 |
| Greg | g****t@g****m | 8 |
| oscardssmith | o****h@g****m | 7 |
| and 32 more... | ||
Issues and Pull Requests
Last synced: 6 months ago
All Time
- Total issues: 46
- Total pull requests: 1,132
- Average time to close issues: almost 2 years
- Average time to close pull requests: about 2 months
- Total issue authors: 20
- Total pull request authors: 51
- Average comments per issue: 2.24
- Average comments per pull request: 1.11
- Merged pull requests: 636
- Bot issues: 3
- Bot pull requests: 758
Past Year
- Issues: 4
- Pull requests: 481
- Average time to close issues: 22 days
- Average time to close pull requests: about 1 month
- Issue authors: 4
- Pull request authors: 24
- Average comments per issue: 1.25
- Average comments per pull request: 1.16
- Merged pull requests: 212
- Bot issues: 0
- Bot pull requests: 352
Top Authors
Issue Authors
- ChrisRackauckas (24)
- github-actions[bot] (5)
- chriselrod (2)
- killah-t-cell (2)
- avik-pal (2)
- baggepinnen (1)
- JuliaTagBot (1)
- yewalenikhil65 (1)
- ArnoStrouwen (1)
- GodotMisogi (1)
- Gregliest (1)
- ljuszkie (1)
- 1-Bart-1 (1)
- ParamThakkar123 (1)
- nathanaelbosch (1)
Pull Request Authors
- github-actions[bot] (823)
- ChrisRackauckas (136)
- avik-pal (54)
- ParamThakkar123 (23)
- TorkelE (19)
- ErikQQY (17)
- Vaibhavdixit02 (16)
- gdalle (14)
- anastasia21112 (12)
- chriselrod (11)
- AayushSabharwal (10)
- Spinachboul (10)
- thazhemadam (10)
- ChrisRackauckas-Claude (7)
- dependabot[bot] (6)
Packages
- Total packages: 3
- Total downloads: julia: 4 total
- Total dependent packages: 0 (may contain duplicates)
- Total dependent repositories: 0 (may contain duplicates)
- Total versions: 12
proxy.golang.org: github.com/sciml/scimlbenchmarks.jl
- Documentation: https://pkg.go.dev/github.com/sciml/scimlbenchmarks.jl#section-documentation
- License: mit
- Latest release: v0.1.3 (published over 2 years ago)
proxy.golang.org: github.com/SciML/SciMLBenchmarks.jl
- Documentation: https://pkg.go.dev/github.com/SciML/SciMLBenchmarks.jl#section-documentation
- License: mit
- Latest release: v0.1.3 (published over 2 years ago)
juliahub.com: SciMLBenchmarks
Scientific machine learning (SciML) benchmarks, AI for science, and (differential) equation solvers. Covers Julia, Python (PyTorch, Jax), MATLAB, R
- Homepage: https://docs.sciml.ai/SciMLBenchmarksOutput/stable/
- Documentation: https://docs.juliahub.com/General/SciMLBenchmarks/stable/
- License: MIT
- Latest release: 0.1.3 (published over 2 years ago)
Dependencies
- actions/checkout v2 composite
- JuliaRegistries/TagBot v1 composite
- actions/checkout v2 composite
- julia-actions/setup-julia v1 composite
- actions/checkout v4 composite
- crate-ci/typos v1.16.23 composite