Science Score: 44.0%
This score indicates how likely this project is to be science-related based on various indicators:
- ✓ CITATION.cff file: found CITATION.cff file
- ✓ codemeta.json file: found codemeta.json file
- ✓ .zenodo.json file: found .zenodo.json file
- ○ DOI references
- ○ Academic publication links
- ○ Academic email domains
- ○ Institutional organization owner
- ○ JOSS paper metadata
- ○ Scientific vocabulary similarity: low similarity (9.5%) to scientific vocabulary
Keywords
Repository
World's complete programming language benchmark
Basic Info
- Host: GitHub
- Owner: leon0399
- License: MIT
- Language: Python
- Default Branch: master
- Homepage: https://leon0399.ru/experiments/benchmarks/
- Size: 1.18 MB
Statistics
- Stars: 4
- Watchers: 2
- Forks: 1
- Open Issues: 5
- Releases: 0
Topics
Metadata Files
README.md
Complete Benchmark
World's complete programming language benchmark.
Results
[!IMPORTANT] This project is not intended to be the sole source for your decisions. Each programming language has its own unique advantages and disadvantages, and performance is only one aspect. Some languages may be more suitable for different projects due to their ecosystem, established best practices, and other factors. Always consider the specific needs and context of your project before making a decision.
See RESULTS.md.
Algorithms
The benchmark suite covers a variety of individual scripts grouped by algorithm family:
- collatz/MaxSequence – find the number below a limit that produces the longest Collatz sequence.
- linpack/Linpack – solve a dense system of linear equations from the LINPACK benchmark.
- mandelbrot/Simple – render an ASCII Mandelbrot set.
- primes/Atkin – generate primes using the Sieve of Atkin.
- primes/Simple – generate primes via straightforward trial division.
- recursion/Tak – compute the Tak function to test deep recursion.
- treap/Naive – exercise inserts and deletes on a treap data structure.
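As an illustration of what these scripts compute, here is a minimal Python sketch of the collatz/MaxSequence task (the function names are hypothetical, not the repository's actual implementation): find the starting number below a limit that produces the longest Collatz sequence.

```python
def collatz_length(n: int) -> int:
    """Count the steps until n reaches 1 under the Collatz rules:
    halve even numbers, map odd n to 3n + 1."""
    steps = 0
    while n != 1:
        n = n // 2 if n % 2 == 0 else 3 * n + 1
        steps += 1
    return steps

def max_sequence(limit: int) -> tuple[int, int]:
    """Return (start, length) for the longest Collatz sequence
    among starting values below `limit`."""
    best = max(range(1, limit), key=collatz_length)
    return best, collatz_length(best)
```

The real benchmark scripts follow the same shape in each language, which is what makes the cross-language timings comparable.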
Implementations
The following table shows which languages include implementations of each algorithm.
| Language | collatz/MaxSequence | linpack/Linpack | mandelbrot/Simple | primes/Atkin | primes/Simple | recursion/Tak | treap/Naive |
|---|---|---|---|---|---|---|---|
| c | ❌ | ❌ | ❌ | ❌ | ✅ | ❌ | ❌ |
| c-plus-plus | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ |
| c-sharp | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ |
| fortran | ❌ | ❌ | ❌ | ❌ | ✅ | ❌ | ❌ |
| go | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ |
| java | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ |
| javascript | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ |
| kotlin | ❌ | ❌ | ❌ | ❌ | ✅ | ❌ | ✅ |
| lua | ❌ | ❌ | ✅ | ❌ | ✅ | ✅ | ✅ |
| perl | ❌ | ❌ | ❌ | ❌ | ✅ | ✅ | ❌ |
| php | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ |
| python | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ |
| ruby | ✅ | ❌ | ✅ | ✅ | ✅ | ✅ | ✅ |
| rust | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ |
| swift | ❌ | ❌ | ❌ | ❌ | ✅ | ❌ | ❌ |
| zig | ❌ | ❌ | ❌ | ❌ | ✅ | ❌ | ❌ |
Running the benchmark
Docker
Run full suite
```bash
docker-compose run benchmark python3 ./benchmark.py run --output jsonl=./.results/results.jsonl --output markdown=./RESULTS.md
```
Run specific languages only
```bash
docker-compose run benchmark python3 ./benchmark.py run --lang rust go php --output jsonl=./.results/results.jsonl --output markdown=./RESULTS.md
```
Run specific scripts only
```bash
docker-compose run benchmark python3 ./benchmark.py run --script primes/Simple linpack/Linpack recursion/Tak --output jsonl=./.results/results.jsonl --output markdown=./RESULTS.md
```
[!TIP] You can combine the options above.
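For instance, the `--lang` and `--script` flags shown above can appear in the same invocation (a hypothetical combination, not a command taken from the README):

```bash
docker-compose run benchmark python3 ./benchmark.py run --lang rust go --script primes/Simple recursion/Tak --output jsonl=./.results/results.jsonl --output markdown=./RESULTS.md
```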
Running manually
```bash
python3 ./benchmark.py
```
Owner
- Name: Leonid Meleshin
- Login: leon0399
- Kind: user
- Location: Knowhere
- Website: leon0399.ru/
- Twitter: leon0399
- Repositories: 26
- Profile: https://github.com/leon0399
Breathing with new ideas. Perfecting my child: @senseshift
Citation (CITATION.cff)
```yaml
cff-version: 1.2.0
message: "If you use results, please cite it as below."
authors:
  - family-names: "Meleshin"
    given-names: "Leonid"
    orcid: "https://orcid.org/0000-0002-9132-6406"
title: "Complete Benchmark"
date-released: 2021-09-13
url: "https://github.com/leon0399/benchmarks"
```
GitHub Events
Total
- Issues event: 6
- Delete event: 5
- Issue comment event: 3
- Push event: 47
- Pull request review event: 1
- Pull request event: 17
- Create event: 8
Last Year
- Issues event: 6
- Delete event: 5
- Issue comment event: 3
- Push event: 47
- Pull request review event: 1
- Pull request event: 17
- Create event: 8
Issues and Pull Requests
Last synced: 4 months ago
All Time
- Total issues: 13
- Total pull requests: 42
- Average time to close issues: about 1 year
- Average time to close pull requests: 12 days
- Total issue authors: 3
- Total pull request authors: 3
- Average comments per issue: 0.77
- Average comments per pull request: 1.21
- Merged pull requests: 21
- Bot issues: 0
- Bot pull requests: 30
Past Year
- Issues: 3
- Pull requests: 9
- Average time to close issues: N/A
- Average time to close pull requests: 6 days
- Issue authors: 3
- Pull request authors: 2
- Average comments per issue: 0.0
- Average comments per pull request: 0.0
- Merged pull requests: 6
- Bot issues: 0
- Bot pull requests: 3
Top Authors
Issue Authors
- leon0399 (10)
- kherolol (1)
- Hyugiya (1)
Pull Request Authors
- sweep-ai[bot] (32)
- leon0399 (17)
- dependabot[bot] (16)
Top Labels
Issue Labels
Pull Request Labels
Dependencies
- actions/cache v2 composite
- actions/checkout v2 composite
- docker/build-push-action v2 composite
- docker/login-action v1 composite
- docker/setup-buildx-action v1 composite
- ${IMAGE} latest build
- ghcr.io/leon0399/benchmarks latest