AutoScore: An Interpretable Machine Learning-Based Automatic Clinical Score Generator
Science Score: 46.0%
This score indicates how likely this project is to be science-related based on various indicators:
- ○ CITATION.cff file
- ✓ codemeta.json file (found codemeta.json file)
- ○ .zenodo.json file
- ✓ DOI references (found 21 DOI reference(s) in README)
- ✓ Academic publication links (links to: sciencedirect.com)
- ✓ Committers with academic emails (2 of 5 committers (40.0%) from academic institutions)
- ○ Institutional organization owner
- ○ JOSS paper metadata
- ○ Scientific vocabulary similarity (low similarity (10.2%) to scientific vocabulary)
Repository
AutoScore: An Interpretable Machine Learning-Based Automatic Clinical Score Generator
Statistics
- Stars: 33
- Watchers: 3
- Forks: 5
- Open Issues: 13
- Releases: 0
Metadata Files
README.md
AutoScore: An Interpretable Machine Learning-Based Automatic Clinical Score Generator
AutoScore is a novel machine learning framework that automates the development of interpretable clinical scoring models. AutoScore consists of six modules: 1) variable ranking with machine learning, 2) variable transformation, 3) score derivation, 4) model selection, 5) domain knowledge-based score fine-tuning, and 6) performance evaluation. The original AutoScore structure is elaborated in this article, and its flowchart is shown in the following figure. AutoScore was originally designed for binary outcomes and later extended to survival and ordinal outcomes. It can seamlessly generate risk scores using a parsimonious set of variables for different types of clinical outcomes, and the resulting scores can be easily implemented and validated in clinical practice. Moreover, it enables users to build transparent and interpretable clinical scores quickly and straightforwardly.
Please visit our bookdown page for a full tutorial on AutoScore usage.
Usage
The five pipeline functions constitute the 5-step AutoScore-based process for generating point-based clinical scores for binary, survival, and ordinal outcomes.
This 5-step process gives users the flexibility of customization (e.g., determining the final list of variables from the parsimony plot, and fine-tuning the cutoffs used in variable transformation):
- STEP (i): `AutoScore_rank()` or `AutoScore_rank_Survival()` or `AutoScore_rank_Ordinal()`: Rank variables with machine learning (AutoScore Module 1)
- STEP (ii): `AutoScore_parsimony()` or `AutoScore_parsimony_Survival()` or `AutoScore_parsimony_Ordinal()`: Select the best model with the parsimony plot (AutoScore Modules 2+3+4)
- STEP (iii): `AutoScore_weighting()` or `AutoScore_weighting_Survival()` or `AutoScore_weighting_Ordinal()`: Generate the initial score with the final list of variables (re-run AutoScore Modules 2+3)
- STEP (iv): `AutoScore_fine_tuning()` or `AutoScore_fine_tuning_Survival()` or `AutoScore_fine_tuning_Ordinal()`: Fine-tune the score by revising `cut_vec` with domain knowledge (AutoScore Module 5)
- STEP (v): `AutoScore_testing()` or `AutoScore_testing_Survival()` or `AutoScore_testing_Ordinal()`: Evaluate the final score with ROC analysis (AutoScore Module 6)
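For a binary outcome, the five steps above can be sketched as the following R workflow. The function names come from this package; the argument names (`train_set`, `validation_set`, `test_set`, `rank`, `final_variables`, `cut_vec`, `scoring_table`) and the choice of keeping the top 6 variables are illustrative assumptions, not the authoritative interface; see the bookdown tutorial for exact signatures.

``` r
library(AutoScore)

# Assumes train_set, validation_set, and test_set are data frames
# with predictors plus a binary outcome column.

# STEP (i): rank variables with machine learning (Module 1)
ranking <- AutoScore_rank(train_set = train_set)

# STEP (ii): inspect the parsimony plot and choose the number of
# variables to keep (Modules 2+3+4)
AUC <- AutoScore_parsimony(train_set = train_set,
                           validation_set = validation_set,
                           rank = ranking, n_min = 1, n_max = 20)
final_variables <- names(ranking[1:6])  # e.g., keep the top 6 variables

# STEP (iii): generate the initial score (re-run Modules 2+3)
cut_vec <- AutoScore_weighting(train_set = train_set,
                               validation_set = validation_set,
                               final_variables = final_variables)

# STEP (iv): revise cut_vec with domain knowledge, then fine-tune (Module 5)
scoring_table <- AutoScore_fine_tuning(train_set = train_set,
                                       validation_set = validation_set,
                                       final_variables = final_variables,
                                       cut_vec = cut_vec)

# STEP (v): evaluate the final score with ROC analysis (Module 6)
AutoScore_testing(test_set = test_set,
                  final_variables = final_variables,
                  cut_vec = cut_vec,
                  scoring_table = scoring_table)
```

For survival or ordinal outcomes, the corresponding `_Survival` or `_Ordinal` variants are substituted at each step.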
We also include several optional functions in the package, which could help with data analysis and result reporting.
Citation
Core paper
- Xie F, Chakraborty B, Ong MEH, Goldstein BA, Liu N. AutoScore: A machine learning-based automatic clinical score generator and its application to mortality prediction using electronic health records. JMIR Medical Informatics 2020; 8(10): e21798.
- Xie F, Ning Y, Liu M, Li S, Saffari SE, Yuan H, Volovici V, Ting DSW, Goldstein BA, Ong MEH, Vaughan R, Chakraborty B, Liu N. A universal AutoScore framework to develop interpretable scoring systems for predicting common types of clinical outcomes. STAR Protocols 2023 Jun; 4(2): 102302.
Method extension
- Xie F, Ning Y, Yuan H, Goldstein BA, Ong MEH, Liu N, Chakraborty B. AutoScore-Survival: Developing interpretable machine learning-based time-to-event scores with right-censored survival data. Journal of Biomedical Informatics 2022; 125: 103959.
- Saffari SE, Ning Y, Xie F, Chakraborty B, Volovici V, Vaughan R, Ong MEH, Liu N. AutoScore-Ordinal: An interpretable machine learning framework for generating scoring models for ordinal outcomes. BMC Medical Research Methodology 2022; 22: 286.
- Ning Y, Li S, Ong ME, Xie F, Chakraborty B, Ting DS, Liu N. A novel interpretable machine learning system to generate clinical risk scores: An application for predicting early mortality or unplanned readmission in a retrospective cohort study. PLOS Digital Health 2022; 1(6): e0000062.
Clinical application
This page provides a collection of clinical applications using AutoScore and its extensions. The list is categorized by medical specialty and updated regularly; however, because updates are made manually, it may not capture all publications.
Contact
- Feng Xie (Email: xief@u.duke.nus.edu)
- Yilin Ning (Email: yilin.ning@duke-nus.edu.sg)
- Nan Liu (Email: liu.nan@duke-nus.edu.sg)
Package installation
Install from GitHub or CRAN:
``` r
# From GitHub
install.packages("devtools")
library(devtools)
install_github(repo = "nliulab/AutoScore", build_vignettes = TRUE)

# From CRAN (recommended)
install.packages("AutoScore")
```
Load AutoScore package:
``` r
library(AutoScore)
```
Owner
- Login: nliulab
- Kind: user
- Location: Singapore
- Company: Duke-NUS Medical School
- Website: http://www.DigitalMedicineLab.org/
- Twitter: nliulab
- Repositories: 3
- Profile: https://github.com/nliulab
LIU Lab - Digital Medicine
GitHub Events
Total
- Watch event: 2
- Push event: 14
- Fork event: 1
Last Year
- Watch event: 2
- Push event: 14
- Fork event: 1
Committers
Last synced: over 2 years ago
Top Committers
| Name | Email | Commits |
|---|---|---|
| nliulab | 5****b | 102 |
| XIE FENG | x****f@u****u | 97 |
| nyilin | n****l@g****m | 20 |
| siqili0325 | s****i@u****u | 2 |
| Michelle Liu | 5****n | 1 |
Issues and Pull Requests
Last synced: over 2 years ago
All Time
- Total issues: 9
- Total pull requests: 1
- Average time to close issues: N/A
- Average time to close pull requests: about 15 hours
- Total issue authors: 5
- Total pull request authors: 1
- Average comments per issue: 2.22
- Average comments per pull request: 0.0
- Merged pull requests: 1
- Bot issues: 0
- Bot pull requests: 0
Past Year
- Issues: 6
- Pull requests: 0
- Average time to close issues: N/A
- Average time to close pull requests: N/A
- Issue authors: 2
- Pull request authors: 0
- Average comments per issue: 2.5
- Average comments per pull request: 0
- Merged pull requests: 0
- Bot issues: 0
- Bot pull requests: 0
Top Authors
Issue Authors
- shlid007 (4)
- renlok (1)
- Aimusa (1)
- EvaPang2022 (1)
- mythhere99 (1)
Pull Request Authors
- fengx13 (1)
Packages
- Total packages: 1
- Total downloads: 495 last month (CRAN)
- Total dependent packages: 0
- Total dependent repositories: 0
- Total versions: 4
- Total maintainers: 1
cran.r-project.org: AutoScore
An Interpretable Machine Learning-Based Automatic Clinical Score Generator
- Homepage: https://github.com/nliulab/AutoScore
- Documentation: http://cran.r-project.org/web/packages/AutoScore/AutoScore.pdf
- License: GPL-2 | GPL-3 [expanded from: GPL (≥ 2)]
- Latest release: 1.1.0 (published 8 months ago)
Dependencies
- R >= 3.5.0 depends
- Hmisc * imports
- car * imports
- coxed * imports
- dplyr * imports
- ggplot2 * imports
- knitr * imports
- magrittr * imports
- ordinal * imports
- pROC * imports
- plotly * imports
- randomForest * imports
- randomForestSRC * imports
- rlang * imports
- survAUC * imports
- survival * imports
- survminer * imports
- tableone * imports
- tidyr * imports
- rmarkdown * suggests
- rpart * suggests