SLmetrics
A high-performance R :package: for supervised and unsupervised machine learning evaluation metrics written in 'C++'.
Science Score: 26.0%
This score indicates how likely this project is to be science-related based on various indicators:
- ○ CITATION.cff file
- ✓ codemeta.json file: found codemeta.json file
- ✓ .zenodo.json file: found .zenodo.json file
- ○ DOI references
- ○ Academic publication links
- ○ Academic email domains
- ○ Institutional organization owner
- ○ JOSS paper metadata
- ○ Scientific vocabulary similarity: low similarity (16.8%) to scientific vocabulary
Keywords
Repository
A high-performance R :package: for supervised and unsupervised machine learning evaluation metrics written in 'C++'.
Basic Info
- Host: GitHub
- Owner: serkor1
- License: gpl-3.0
- Language: C++
- Default Branch: development
- Homepage: https://slmetrics-docs.gitbook.io/v1
- Size: 27.3 MB
Statistics
- Stars: 25
- Watchers: 1
- Forks: 4
- Open Issues: 1
- Releases: 8
Topics
Metadata Files
README.md
{SLmetrics}: Machine learning performance evaluation on steroids 
{SLmetrics} is a lightweight R package written in C++ and {Rcpp} for memory-efficient and lightning-fast machine learning performance evaluation; it’s like using a supercharged {yardstick} but without the risk of soft to super-hard deprecations.

{SLmetrics} covers both regression and classification metrics and provides (almost) the same array of metrics as {scikit-learn} and {PyTorch}, all without {reticulate} and the Python compile-run-(crash)-debug cycle.

Depending on the mood and alignment of planets, {SLmetrics} stands for Supervised Learning metrics, or Statistical Learning metrics. If {SLmetrics} catches on, the latter will be the core philosophy and include unsupervised learning metrics. If not, then it will remain a {pkg} for Supervised Learning metrics, and a sandbox for me to develop my C++ skills.
:rocket: Getting Started
Below you’ll find instructions to install {SLmetrics} and get started with your first metric, the Root Mean Squared Error (RMSE).
:package: CRAN version
``` r
# install latest CRAN build
install.packages("SLmetrics")
```
:books: Basic Usage
Below is a minimal example demonstrating how to compute both unweighted and weighted RMSE.
``` r
library(SLmetrics)

actual    <- c(10.2, 12.5, 14.1)
predicted <- c(9.8, 11.5, 14.2)
weights   <- c(0.2, 0.5, 0.3)

cat(
  "Root Mean Squared Error",
  rmse(actual = actual, predicted = predicted),
  "Root Mean Squared Error (weighted)",
  weighted.rmse(actual = actual, predicted = predicted, w = weights),
  sep = "\n"
)
> Root Mean Squared Error
> 0.6244998
> Root Mean Squared Error (weighted)
> 0.7314369
```
That’s all! Now you can explore the rest of this README for in-depth usage, performance comparisons, and more details about {SLmetrics}.
:information_source: Why?
Machine learning can be a complicated task; the steps from feature engineering to model deployment require carefully measured actions and decisions. One low-hanging fruit to simplify this process is performance evaluation.
At its core, performance evaluation is essentially just comparing two vectors - a programmatically and, at times, mathematically trivial step in the machine learning pipeline, but one that can become complicated due to:
- Dependencies and potential deprecations
- Needlessly complex or repetitive arguments
- Performance and memory bottlenecks at scale
{SLmetrics} solves these issues by being:
- Fast: Powered by C++ and {Rcpp}
- Memory-efficient: Everything is structured around pointers and references
- Lightweight: Only depends on {Rcpp} and {lattice}
- Simple: S3-based, minimal overhead, and flexible inputs

Performance evaluation should be plug-and-play and “just work” out of the box - there’s no need to worry about quasiquotations, dependencies, deprecations, or variations of the same functions relative to their arguments when using {SLmetrics}.
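To make the “simple and flexible” claim concrete, here is a minimal sketch of the single calling convention used across regression and classification. It reuses only functions shown elsewhere in this README (rmse(), weighted.rmse(), and cmatrix()); the toy vectors are purely illustrative.

``` r
library(SLmetrics)

# Regression: plain numeric vectors in, a single numeric value out.
actual    <- c(10.2, 12.5, 14.1)
predicted <- c(9.8, 11.5, 14.2)
rmse(actual = actual, predicted = predicted)

# Weighted variants follow the same pattern with an extra `w` argument.
weighted.rmse(actual = actual, predicted = predicted, w = c(0.2, 0.5, 0.3))

# Classification: plain factor vectors in, a confusion matrix out.
actual_cls    <- factor(c("a", "b", "a", "b"))
predicted_cls <- factor(c("a", "b", "b", "b"))
cmatrix(actual = actual_cls, predicted = predicted_cls)
```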
:zap: Performance Comparison
One, obviously, can’t build an R package on C++ and {Rcpp} without a proper pissing contest at the urinals - below is a comparison of execution time and memory efficiency for two simple cases that any {pkg} should be able to handle gracefully: computing a 2 x 2 confusion matrix and computing the RMSE[^1].
:fast_forward: Speed comparison

As shown in the chart, {SLmetrics} maintains consistently low(er) execution times across different sample sizes.
:floppy_disk: Memory-efficiency
Below are the results for garbage collections and total memory allocations when computing a 2×2 confusion matrix (N = 1e7) and RMSE (N = 1e7) [^2]. Notice that {SLmetrics} requires no GC calls for these operations.
| | Iterations | Garbage Collections [gc()] | gc() pr. second | Memory Allocation (MB) |
|:---|---:|---:|---:|---:|
| {SLmetrics} | 100 | 0 | 0.00 | 0 |
| {yardstick} | 100 | 190 | 4.44 | 381 |
| {MLmetrics} | 100 | 186 | 4.50 | 381 |
| {mlr3measures} | 100 | 371 | 3.93 | 916 |
2 x 2 Confusion Matrix (N = 1e7)
| | Iterations | Garbage Collections [gc()] | gc() pr. second | Memory Allocation (MB) |
|:---|---:|---:|---:|---:|
| {SLmetrics} | 100 | 0 | 0.00 | 0 |
| {yardstick} | 100 | 149 | 4.30 | 420 |
| {MLmetrics} | 100 | 15 | 2.00 | 76 |
| {mlr3measures} | 100 | 12 | 1.29 | 76 |
RMSE (N = 1e7)
In both tasks, {SLmetrics} remains extremely memory-efficient, even at large sample sizes.
[!IMPORTANT]
From the {bench} documentation: “Total amount of memory allocated by R while running the expression. Memory allocated outside the R heap, e.g. by `malloc()` or `new` directly, is not tracked; take care to avoid misinterpreting the results if running code that may do this.”
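The exact benchmark scripts are linked in the footnotes. As a rough, illustrative reproduction (assuming {bench} is installed and using {yardstick}'s vector interface rmse_vec() as the comparison), a measurement like the ones above can be set up as follows; the numbers will of course depend on your machine.

``` r
library(SLmetrics)

set.seed(1903)
N         <- 1e7
actual    <- rnorm(N)
predicted <- actual + rnorm(N)

# bench::mark() reports R-heap allocations and gc() counts per expression;
# allocations made outside the R heap (e.g. in C++) are not tracked.
bench::mark(
  SLmetrics = rmse(actual, predicted),
  yardstick = yardstick::rmse_vec(truth = actual, estimate = predicted),
  iterations = 10,
  check      = FALSE
)
```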
:information_source: Basic usage
In its simplest form, {SLmetrics} functions work directly with pairs of <numeric> vectors (for regression) or <factor> vectors (for classification). Below we demonstrate this on two well-known datasets, mtcars (regression) and iris (classification).
:books: Regression
We first fit a linear model to predict mpg in the mtcars dataset, then compute the in-sample RMSE:
``` r
# Evaluate a linear model on mpg (mtcars)
model <- lm(mpg ~ ., data = mtcars)
rmse(mtcars$mpg, fitted(model))
> [1] 2.146905
```
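If you want observation weights, the weighted variant from the Basic Usage section works the same way; the horsepower-based weights below are purely illustrative.

``` r
# weight each car by its horsepower (an arbitrary, illustrative choice)
w <- mtcars$hp / sum(mtcars$hp)

weighted.rmse(
  actual    = mtcars$mpg,
  predicted = fitted(model),
  w         = w
)
```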
:books: Classification
Now we recode the iris dataset into a binary problem (“virginica” vs. “others”) and fit a logistic regression. Then we generate predicted classes, compute the confusion matrix, and summarize it.
``` r
# 1) recode iris into a binary problem
iris$species_num <- as.numeric(iris$Species == "virginica")

# 2) fit the logistic regression
model <- glm(
  formula = species_num ~ Sepal.Length + Sepal.Width,
  data    = iris,
  family  = binomial(link = "logit")
)

# 3) generate predicted classes
predicted <- factor(
  as.numeric(predict(model, type = "response") > 0.5),
  levels = c(1, 0),
  labels = c("Virginica", "Others")
)

# 4) generate actual values as a factor
actual <- factor(
  x      = iris$species_num,
  levels = c(1, 0),
  labels = c("Virginica", "Others")
)
```
``` r
# 5) generate the confusion matrix and summarize it
summary(
  confusion_matrix <- cmatrix(
    actual    = actual,
    predicted = predicted
  )
)
> Confusion Matrix (2 x 2)
> ================================================================================
>           Virginica Others
> Virginica        35     15
> Others           14     86
> ================================================================================
> Overall Statistics (micro average)
>  - Accuracy:          0.81
>  - Balanced Accuracy: 0.78
>  - Sensitivity:       0.81
>  - Specificity:       0.81
>  - Precision:         0.81
```
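As a quick sanity check, the reported accuracy can be reproduced with base R from the same two factor vectors:

``` r
# proportion of matching labels: (35 + 86) / 150
mean(actual == predicted)
> [1] 0.8066667
```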
:information_source: Enable OpenMP
[!IMPORTANT]
OpenMP support in {SLmetrics} is experimental. Use it with caution, as performance gains and stability may vary based on your system configuration and workload.
You can control OpenMP usage within {SLmetrics} using openmp.on() and openmp.off(). Below are examples demonstrating how to enable and disable OpenMP:
``` r
# enable OpenMP
SLmetrics::openmp.on()
> OpenMP enabled!

# disable OpenMP
SLmetrics::openmp.off()
> OpenMP disabled!
```
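A quick way to see whether threading helps on your machine is to time the same call with OpenMP enabled and disabled. The sketch below uses rmse() on a large vector purely as an illustration; actual gains depend on the metric, the data size, and your hardware.

``` r
library(SLmetrics)

set.seed(1903)
actual    <- rnorm(1e7)
predicted <- actual + rnorm(1e7)

# time the metric with OpenMP enabled
SLmetrics::openmp.on()
system.time(rmse(actual, predicted))

# time the same metric with OpenMP disabled
SLmetrics::openmp.off()
system.time(rmse(actual, predicted))
```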
To illustrate the impact of OpenMP on performance, consider the following benchmarks for calculating entropy on a 1,000,000 x 200 matrix over 100 iterations[^3].
:books: Entropy without OpenMP
| Iterations | Runtime (sec) | Garbage Collections [gc()] | gc() pr. second | Memory Allocation (MB) |
|---:|---:|---:|---:|---:|
| 100 | 0.86 | 0 | 0 | 0 |
1e6 x 200 matrix without OpenMP
:books: Entropy with OpenMP
| Iterations | Runtime (sec) | Garbage Collections [gc()] | gc() pr. second | Memory Allocation (MB) |
|---:|---:|---:|---:|---:|
| 100 | 0.15 | 0 | 0 | 0 |
1e6 x 200 matrix with OpenMP
:package: Install from source
GitHub release
``` r
# install GitHub release
pak::pak(pkg = "serkor1/SLmetrics@*release", ask = FALSE)
```
Nightly build
Clone repository with submodules
``` console
git clone --recurse-submodules https://github.com/serkor1/SLmetrics.git
```
Installing with build tools
``` console
make build
```
Installing with {pak}
``` r
# install nightly build
pak::pak(pkg = ".", ask = FALSE)
```
:information_source: Code of Conduct
Please note that the {SLmetrics} project is released with a Contributor Code of Conduct. By contributing to this project, you agree to abide by its terms.
[^1]: The source code is available here and here.
[^2]: The source code is available here.
[^3]: The source code is available here.
Owner
- Name: Serkan Korkmaz
- Login: serkor1
- Kind: user
- Location: Denmark
- Company: 1903 Analytics and Consulting
- Website: www.1903analytics.ai
- Repositories: 2
- Profile: https://github.com/serkor1
Chief Economist at 1903 Analytics and Consulting.
CodeMeta (codemeta.json)
{
"@context": "https://doi.org/10.5063/schema/codemeta-2.0",
"@type": "SoftwareSourceCode",
"identifier": "SLmetrics",
"description": " Performance evaluation metrics for supervised and unsupervised machine learning, statistical learning and artificial intelligence applications. Core computations are implemented in 'C++' for scalability and efficiency.",
"name": "SLmetrics: Machine Learning Performance Evaluation on Steroids",
"relatedLink": "https://slmetrics-docs.gitbook.io/v1",
"codeRepository": "https://github.com/serkor1/SLmetrics",
"issueTracker": "https://github.com/serkor1/SLmetrics/issues",
"license": "https://spdx.org/licenses/GPL-3.0",
"version": "0.3.4",
"programmingLanguage": {
"@type": "ComputerLanguage",
"name": "R",
"url": "https://r-project.org"
},
"runtimePlatform": "R version 4.4.3 (2025-02-28)",
"provider": {
"@id": "https://cran.r-project.org",
"@type": "Organization",
"name": "Comprehensive R Archive Network (CRAN)",
"url": "https://cran.r-project.org"
},
"author": [
{
"@type": "Person",
"givenName": "Serkan",
"familyName": "Korkmaz",
"email": "serkor1@duck.com",
"@id": "https://orcid.org/0000-0002-5052-0982"
}
],
"copyrightHolder": [
{
"@type": "Person",
"givenName": "Serkan",
"familyName": "Korkmaz",
"email": "serkor1@duck.com",
"@id": "https://orcid.org/0000-0002-5052-0982"
}
],
"maintainer": [
{
"@type": "Person",
"givenName": "Serkan",
"familyName": "Korkmaz",
"email": "serkor1@duck.com",
"@id": "https://orcid.org/0000-0002-5052-0982"
}
],
"softwareSuggestions": [
{
"@type": "SoftwareApplication",
"identifier": "knitr",
"name": "knitr",
"provider": {
"@id": "https://cran.r-project.org",
"@type": "Organization",
"name": "Comprehensive R Archive Network (CRAN)",
"url": "https://cran.r-project.org"
},
"sameAs": "https://CRAN.R-project.org/package=knitr"
},
{
"@type": "SoftwareApplication",
"identifier": "reticulate",
"name": "reticulate",
"provider": {
"@id": "https://cran.r-project.org",
"@type": "Organization",
"name": "Comprehensive R Archive Network (CRAN)",
"url": "https://cran.r-project.org"
},
"sameAs": "https://CRAN.R-project.org/package=reticulate"
},
{
"@type": "SoftwareApplication",
"identifier": "rmarkdown",
"name": "rmarkdown",
"provider": {
"@id": "https://cran.r-project.org",
"@type": "Organization",
"name": "Comprehensive R Archive Network (CRAN)",
"url": "https://cran.r-project.org"
},
"sameAs": "https://CRAN.R-project.org/package=rmarkdown"
},
{
"@type": "SoftwareApplication",
"identifier": "testthat",
"name": "testthat",
"version": ">= 3.0.0",
"provider": {
"@id": "https://cran.r-project.org",
"@type": "Organization",
"name": "Comprehensive R Archive Network (CRAN)",
"url": "https://cran.r-project.org"
},
"sameAs": "https://CRAN.R-project.org/package=testthat"
}
],
"softwareRequirements": {
"1": {
"@type": "SoftwareApplication",
"identifier": "grDevices",
"name": "grDevices"
},
"2": {
"@type": "SoftwareApplication",
"identifier": "lattice",
"name": "lattice",
"provider": {
"@id": "https://cran.r-project.org",
"@type": "Organization",
"name": "Comprehensive R Archive Network (CRAN)",
"url": "https://cran.r-project.org"
},
"sameAs": "https://CRAN.R-project.org/package=lattice"
},
"3": {
"@type": "SoftwareApplication",
"identifier": "Rcpp",
"name": "Rcpp",
"provider": {
"@id": "https://cran.r-project.org",
"@type": "Organization",
"name": "Comprehensive R Archive Network (CRAN)",
"url": "https://cran.r-project.org"
},
"sameAs": "https://CRAN.R-project.org/package=Rcpp"
},
"4": {
"@type": "SoftwareApplication",
"identifier": "R",
"name": "R",
"version": ">= 4.0.0"
},
"SystemRequirements": "C++17"
},
"fileSize": "1900.873KB"
}
GitHub Events
Total
- Fork event: 3
- Create event: 94
- Release event: 7
- Issues event: 38
- Watch event: 25
- Delete event: 89
- Issue comment event: 24
- Push event: 804
- Public event: 1
- Gollum event: 5
- Pull request review comment event: 208
- Pull request review event: 101
- Pull request event: 150
Last Year
- Fork event: 3
- Create event: 94
- Release event: 7
- Issues event: 38
- Watch event: 25
- Delete event: 89
- Issue comment event: 24
- Push event: 804
- Public event: 1
- Gollum event: 5
- Pull request review comment event: 208
- Pull request review event: 101
- Pull request event: 150
Issues and Pull Requests
Last synced: 4 months ago
All Time
- Total issues: 13
- Total pull requests: 59
- Average time to close issues: 7 days
- Average time to close pull requests: about 13 hours
- Total issue authors: 2
- Total pull request authors: 2
- Average comments per issue: 0.38
- Average comments per pull request: 0.12
- Merged pull requests: 48
- Bot issues: 0
- Bot pull requests: 3
Past Year
- Issues: 13
- Pull requests: 59
- Average time to close issues: 7 days
- Average time to close pull requests: about 13 hours
- Issue authors: 2
- Pull request authors: 2
- Average comments per issue: 0.38
- Average comments per pull request: 0.12
- Merged pull requests: 48
- Bot issues: 0
- Bot pull requests: 3
Top Authors
Issue Authors
- serkor1 (20)
- dcaseykc (1)
- EmilHvitfeldt (1)
Pull Request Authors
- serkor1 (71)
- dependabot[bot] (3)
Top Labels
Issue Labels
Pull Request Labels
Packages
- Total packages: 1
- Total downloads: 238 last-month (CRAN)
- Total dependent packages: 0
- Total dependent repositories: 0
- Total versions: 2
- Total maintainers: 1
cran.r-project.org: SLmetrics
Machine Learning Performance Evaluation on Steroids
- Homepage: https://slmetrics-docs.gitbook.io/v1
- Documentation: http://cran.r-project.org/web/packages/SLmetrics/SLmetrics.pdf
- License: GPL (≥ 3)
- Latest release: 0.3-4 (published 6 months ago)
Rankings
Maintainers (1)
Dependencies
- actions/checkout v4 composite
- actions/setup-python v4 composite
- r-lib/actions/check-r-package v2 composite
- r-lib/actions/setup-pandoc v2 composite
- r-lib/actions/setup-r v2 composite
- r-lib/actions/setup-r-dependencies v2 composite
- actions/checkout v4 composite
- actions/setup-python v4 composite
- actions/upload-artifact v4 composite
- codecov/codecov-action v4 composite
- r-lib/actions/setup-r v2 composite
- r-lib/actions/setup-r-dependencies v2 composite
- R >= 2.10 depends
- Rcpp * imports
- grDevices * imports
- lattice * imports
- knitr * suggests
- lightgbm * suggests
- mlbench * suggests
- reticulate * suggests
- rmarkdown * suggests
- testthat >= 3.0.0 suggests
- xgboost * suggests
- actions/setup-python v5 composite
- actions/checkout v4 composite
- actions/setup-python v5 composite
- quarto-dev/quarto-actions/publish v2 composite
- quarto-dev/quarto-actions/render v2 composite
- quarto-dev/quarto-actions/setup v2 composite
- r-lib/actions/setup-r v2 composite
- r-lib/actions/setup-r-dependencies v2 composite
- r-hub/actions/checkout v1 composite
- r-hub/actions/platform-info v1 composite
- r-hub/actions/run-check v1 composite
- r-hub/actions/setup v1 composite
- r-hub/actions/setup-deps v1 composite
- r-hub/actions/setup-r v1 composite
- PyYAML *
- imblearn *
- numpy *
- scikit-learn *
- scipy *
- torch *
- torchmetrics *