interpret

Fit interpretable models. Explain blackbox machine learning.

https://github.com/interpretml/interpret

Science Score: 59.0%

This score indicates how likely this project is to be science-related based on various indicators:

  • CITATION.cff file
  • codemeta.json file
    Found codemeta.json file
  • .zenodo.json file
    Found .zenodo.json file
  • DOI references
    Found 28 DOI reference(s) in README
  • Academic publication links
    Links to: arxiv.org, biorxiv.org, medrxiv.org, researchgate.net, pubmed.ncbi, ncbi.nlm.nih.gov, sciencedirect.com, springer.com, wiley.com, nature.com, frontiersin.org, mdpi.com, ieee.org, acm.org
  • Committers with academic emails
    2 of 48 committers (4.2%) from academic institutions
  • Institutional organization owner
  • JOSS paper metadata
  • Scientific vocabulary similarity
    Low similarity (14.1%) to scientific vocabulary

Keywords

ai artificial-intelligence bias blackbox differential-privacy explainability explainable-ai explainable-ml gradient-boosting iml interpretability interpretable-ai interpretable-machine-learning interpretable-ml interpretml machine-learning scikit-learn transparency xai

Keywords from Contributors

agents distributed transformers mlops cameratrap ai-system docstring parallel large-language-models interaction
Last synced: 6 months ago

Repository

Fit interpretable models. Explain blackbox machine learning.

Basic Info
  • Host: GitHub
  • Owner: interpretml
  • License: MIT
  • Language: C++
  • Default Branch: main
  • Homepage: https://interpret.ml/docs
  • Size: 15.2 MB
Statistics
  • Stars: 6,664
  • Watchers: 147
  • Forks: 767
  • Open Issues: 109
  • Releases: 59
Topics
ai artificial-intelligence bias blackbox differential-privacy explainability explainable-ai explainable-ml gradient-boosting iml interpretability interpretable-ai interpretable-machine-learning interpretable-ml interpretml machine-learning scikit-learn transparency xai
Created almost 7 years ago · Last pushed 6 months ago
Metadata Files
Readme Changelog Contributing License Governance

README.md

InterpretML


In the beginning machines learned in darkness, and data scientists struggled in the void to explain them.

Let there be light.

InterpretML is an open-source package that incorporates state-of-the-art machine learning interpretability techniques under one roof. With this package, you can train interpretable glassbox models and explain blackbox systems. InterpretML helps you understand your model's global behavior, or understand the reasons behind individual predictions.

Interpretability is essential for:
- Model debugging - Why did my model make this mistake?
- Feature Engineering - How can I improve my model?
- Detecting fairness issues - Does my model discriminate?
- Human-AI cooperation - How can I understand and trust the model's decisions?
- Regulatory compliance - Does my model satisfy legal requirements?
- High-risk applications - Healthcare, finance, judicial, ...

Installation

Python 3.7+ | Linux, Mac, Windows

```sh
pip install interpret
# OR
conda install -c conda-forge interpret
```

Introducing the Explainable Boosting Machine (EBM)

EBM is an interpretable model developed at Microsoft Research. It uses modern machine learning techniques like bagging, gradient boosting, and automatic interaction detection to breathe new life into traditional GAMs (Generalized Additive Models). This makes EBMs as accurate as state-of-the-art techniques like random forests and gradient boosted trees. However, unlike these blackbox models, EBMs produce exact explanations and are editable by domain experts.

| Dataset/AUROC | Domain   | Logistic Regression | Random Forest | XGBoost   | Explainable Boosting Machine |
|---------------|----------|:-------------------:|:-------------:|:---------:|:----------------------------:|
| Adult Income  | Finance  | .907±.003           | .903±.002     | .927±.001 | .928±.002                    |
| Heart Disease | Medical  | .895±.030           | .890±.008     | .851±.018 | .898±.013                    |
| Breast Cancer | Medical  | .995±.005           | .992±.009     | .992±.010 | .995±.006                    |
| Telecom Churn | Business | .849±.005           | .824±.004     | .828±.010 | .852±.006                    |
| Credit Fraud  | Security | .979±.002           | .950±.007     | .981±.003 | .981±.003                    |

Notebook for reproducing table
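
Concretely, the "exact explanations" claim above means an EBM prediction decomposes into an intercept plus one learned score per term, so any single prediction can be rebuilt by hand. Below is a minimal sketch of that decomposition (not from the README); it assumes the `eval_terms()` and `intercept_` API of recent interpret releases, and the synthetic data is purely illustrative:

```python
import numpy as np
from interpret.glassbox import ExplainableBoostingClassifier

# Toy data purely for illustration.
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 3))
y = (X[:, 0] + 0.5 * X[:, 1] ** 2 > 0.5).astype(int)

ebm = ExplainableBoostingClassifier().fit(X, y)

# Each prediction decomposes exactly into an intercept plus one additive
# score per term (a feature or a pairwise interaction).
per_term = ebm.eval_terms(X[:5])               # shape: (5, n_terms)
logit = ebm.intercept_ + per_term.sum(axis=1)  # rebuild the raw score
prob = 1.0 / (1.0 + np.exp(-logit))            # logistic link for binary case
print(np.allclose(prob, ebm.predict_proba(X[:5])[:, 1]))  # expect True
```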

Supported Techniques

| Interpretability Technique  | Type               |
|-----------------------------|--------------------|
| Explainable Boosting        | glassbox model     |
| APLR                        | glassbox model     |
| Decision Tree               | glassbox model     |
| Decision Rule List          | glassbox model     |
| Linear/Logistic Regression  | glassbox model     |
| SHAP Kernel Explainer       | blackbox explainer |
| LIME                        | blackbox explainer |
| Morris Sensitivity Analysis | blackbox explainer |
| Partial Dependence          | blackbox explainer |
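
The blackbox explainers in this table plug into the same `explain_*`/`show` workflow as the glassbox models shown below. Here is a hedged sketch using Partial Dependence; the constructor signature `PartialDependence(model, data)` is assumed from recent interpret releases, and the toy data stands in for a real training set:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from interpret import show
from interpret.blackbox import PartialDependence

# Toy data purely for illustration.
rng = np.random.default_rng(0)
X_train = rng.normal(size=(500, 4))
y_train = (X_train[:, 0] - X_train[:, 2] > 0).astype(int)

blackbox = RandomForestClassifier().fit(X_train, y_train)

# Assumed constructor: PartialDependence(model, data) per recent releases.
pdp = PartialDependence(blackbox, X_train)
show(pdp.explain_global())  # same show() call used for glassbox models
```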

Train a glassbox model

Let's fit an Explainable Boosting Machine

```python
from interpret.glassbox import ExplainableBoostingClassifier

ebm = ExplainableBoostingClassifier()
ebm.fit(X_train, y_train)

# or substitute with LogisticRegression, DecisionTreeClassifier, RuleListClassifier, ...
# EBM supports pandas dataframes, numpy arrays, and handles "string" data natively.
```

Understand the model

```python
from interpret import show

ebm_global = ebm.explain_global()
show(ebm_global)
```

Global Explanation Image


Understand individual predictions

```python
ebm_local = ebm.explain_local(X_test, y_test)
show(ebm_local)
```

Local Explanation Image


And if you have multiple model explanations, compare them:

```python
show([logistic_regression_global, decision_tree_global])
```

Dashboard Image


If you need to keep your data private, use Differentially Private EBMs (see DP-EBMs)

```python
from interpret.privacy import (DPExplainableBoostingClassifier,
                               DPExplainableBoostingRegressor)

# Specify privacy parameters
dp_ebm = DPExplainableBoostingClassifier(epsilon=1, delta=1e-5)
dp_ebm.fit(X_train, y_train)

show(dp_ebm.explain_global())  # Identical function calls to standard EBMs
```



For more information, see the documentation.


EBMs include pairwise interactions by default. For 3-way interactions and higher see this notebook: https://interpret.ml/docs/python/examples/custom-interactions.html
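
A hypothetical sketch of what that looks like in code; it assumes, per that notebook, that the `interactions` parameter of recent releases accepts explicit feature-index tuples (an int instead requests automatic pairwise detection), and `X_train`/`y_train` are the ambient names used above:

```python
from interpret.glassbox import ExplainableBoostingClassifier

# Assumption (per the linked notebook): explicit index tuples select exact
# interaction terms, including 3-way and higher; an int would instead ask
# for automatic detection of that many pairwise terms.
ebm = ExplainableBoostingClassifier(
    interactions=[(0, 1), (0, 1, 2)]  # one pairwise and one 3-way term
).fit(X_train, y_train)
```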


InterpretML's EBMs can be fit on datasets with 100 million samples in several hours. For larger workloads, consider using distributed EBMs on Azure SynapseML: classification EBMs and regression EBMs



Acknowledgements

InterpretML was originally created by (equal contributions): Samuel Jenkins, Harsha Nori, Paul Koch, and Rich Caruana

EBMs are a fast derivative of GA2M, invented by: Yin Lou, Rich Caruana, Johannes Gehrke, and Giles Hooker

Many people have supported us along the way. Check out ACKNOWLEDGEMENTS.md!

We also build on top of many great packages. Please check them out!

plotly | dash | scikit-learn | lime | shap | salib | skope-rules | treeinterpreter | gevent | joblib | pytest | jupyter

Citations

InterpretML
"InterpretML: A Unified Framework for Machine Learning Interpretability" (H. Nori, S. Jenkins, P. Koch, and R. Caruana 2019)
@article{nori2019interpretml,
  title={InterpretML: A Unified Framework for Machine Learning Interpretability},
  author={Nori, Harsha and Jenkins, Samuel and Koch, Paul and Caruana, Rich},
  journal={arXiv preprint arXiv:1909.09223},
  year={2019}
}
    
Paper link

Explainable Boosting
"Intelligible models for healthcare: Predicting pneumonia risk and hospital 30-day readmission" (R. Caruana, Y. Lou, J. Gehrke, P. Koch, M. Sturm, and N. Elhadad 2015)
@inproceedings{caruana2015intelligible,
  title={Intelligible models for healthcare: Predicting pneumonia risk and hospital 30-day readmission},
  author={Caruana, Rich and Lou, Yin and Gehrke, Johannes and Koch, Paul and Sturm, Marc and Elhadad, Noemie},
  booktitle={Proceedings of the 21st ACM SIGKDD International Conference on Knowledge Discovery and Data Mining},
  pages={1721--1730},
  year={2015},
  organization={ACM}
}
    
Paper link
"Accurate intelligible models with pairwise interactions" (Y. Lou, R. Caruana, J. Gehrke, and G. Hooker 2013)
@inproceedings{lou2013accurate,
  title={Accurate intelligible models with pairwise interactions},
  author={Lou, Yin and Caruana, Rich and Gehrke, Johannes and Hooker, Giles},
  booktitle={Proceedings of the 19th ACM SIGKDD international conference on Knowledge discovery and data mining},
  pages={623--631},
  year={2013},
  organization={ACM}
}
    
Paper link
"Intelligible models for classification and regression" (Y. Lou, R. Caruana, and J. Gehrke 2012)
@inproceedings{lou2012intelligible,
  title={Intelligible models for classification and regression},
  author={Lou, Yin and Caruana, Rich and Gehrke, Johannes},
  booktitle={Proceedings of the 18th ACM SIGKDD international conference on Knowledge discovery and data mining},
  pages={150--158},
  year={2012},
  organization={ACM}
}
    
Paper link
"Interpretability, Then What? Editing Machine Learning Models to Reflect Human Knowledge and Values" (Zijie J. Wang, Alex Kale, Harsha Nori, Peter Stella, Mark E. Nunnally, Duen Horng Chau, Mihaela Vorvoreanu, Jennifer Wortman Vaughan, Rich Caruana 2022)
@article{wang2022interpretability,
  title={Interpretability, Then What? Editing Machine Learning Models to Reflect Human Knowledge and Values},
  author={Wang, Zijie J and Kale, Alex and Nori, Harsha and Stella, Peter and Nunnally, Mark E and Chau, Duen Horng and Vorvoreanu, Mihaela and Vaughan, Jennifer Wortman and Caruana, Rich},
  journal={arXiv preprint arXiv:2206.15465},
  year={2022}
}
    
Paper link
"Axiomatic Interpretability for Multiclass Additive Models" (X. Zhang, S. Tan, P. Koch, Y. Lou, U. Chajewska, and R. Caruana 2019)
@inproceedings{zhang2019axiomatic,
  title={Axiomatic Interpretability for Multiclass Additive Models},
  author={Zhang, Xuezhou and Tan, Sarah and Koch, Paul and Lou, Yin and Chajewska, Urszula and Caruana, Rich},
  booktitle={Proceedings of the 25th ACM SIGKDD International Conference on Knowledge Discovery \& Data Mining},
  pages={226--234},
  year={2019},
  organization={ACM}
}
    
Paper link
"Distill-and-compare: auditing black-box models using transparent model distillation" (S. Tan, R. Caruana, G. Hooker, and Y. Lou 2018)
@inproceedings{tan2018distill,
  title={Distill-and-compare: auditing black-box models using transparent model distillation},
  author={Tan, Sarah and Caruana, Rich and Hooker, Giles and Lou, Yin},
  booktitle={Proceedings of the 2018 AAAI/ACM Conference on AI, Ethics, and Society},
  pages={303--310},
  year={2018},
  organization={ACM}
}
    
Paper link
"Purifying Interaction Effects with the Functional ANOVA: An Efficient Algorithm for Recovering Identifiable Additive Models" (B. Lengerich, S. Tan, C. Chang, G. Hooker, R. Caruana 2019)
@article{lengerich2019purifying,
  title={Purifying Interaction Effects with the Functional ANOVA: An Efficient Algorithm for Recovering Identifiable Additive Models},
  author={Lengerich, Benjamin and Tan, Sarah and Chang, Chun-Hao and Hooker, Giles and Caruana, Rich},
  journal={arXiv preprint arXiv:1911.04974},
  year={2019}
}
    
Paper link
"Interpreting Interpretability: Understanding Data Scientists' Use of Interpretability Tools for Machine Learning" (H. Kaur, H. Nori, S. Jenkins, R. Caruana, H. Wallach, J. Wortman Vaughan 2020)
@inproceedings{kaur2020interpreting,
  title={Interpreting Interpretability: Understanding Data Scientists' Use of Interpretability Tools for Machine Learning},
  author={Kaur, Harmanpreet and Nori, Harsha and Jenkins, Samuel and Caruana, Rich and Wallach, Hanna and Wortman Vaughan, Jennifer},
  booktitle={Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems},
  pages={1--14},
  year={2020}
}
    
Paper link
"How Interpretable and Trustworthy are GAMs?" (C. Chang, S. Tan, B. Lengerich, A. Goldenberg, R. Caruana 2020)
@article{chang2020interpretable,
  title={How Interpretable and Trustworthy are GAMs?},
  author={Chang, Chun-Hao and Tan, Sarah and Lengerich, Ben and Goldenberg, Anna and Caruana, Rich},
  journal={arXiv preprint arXiv:2006.06466},
  year={2020}
}
    
Paper link

Differential Privacy
"Accuracy, Interpretability, and Differential Privacy via Explainable Boosting" (H. Nori, R. Caruana, Z. Bu, J. Shen, J. Kulkarni 2021)
@inproceedings{pmlr-v139-nori21a,
  title =    {Accuracy, Interpretability, and Differential Privacy via Explainable Boosting},
  author =       {Nori, Harsha and Caruana, Rich and Bu, Zhiqi and Shen, Judy Hanwen and Kulkarni, Janardhan},
  booktitle =    {Proceedings of the 38th International Conference on Machine Learning},
  pages =    {8227--8237},
  year =     {2021},
  volume =   {139},
  series =   {Proceedings of Machine Learning Research},
  publisher =    {PMLR}
}
    
Paper link

LIME
"Why should i trust you?: Explaining the predictions of any classifier" (M. T. Ribeiro, S. Singh, and C. Guestrin 2016)
@inproceedings{ribeiro2016should,
  title={Why should I trust you?: Explaining the predictions of any classifier},
  author={Ribeiro, Marco Tulio and Singh, Sameer and Guestrin, Carlos},
  booktitle={Proceedings of the 22nd ACM SIGKDD international conference on knowledge discovery and data mining},
  pages={1135--1144},
  year={2016},
  organization={ACM}
}
    
Paper link

SHAP
"A Unified Approach to Interpreting Model Predictions" (S. M. Lundberg and S.-I. Lee 2017)
@incollection{NIPS2017_7062,
 title = {A Unified Approach to Interpreting Model Predictions},
 author = {Lundberg, Scott M and Lee, Su-In},
 booktitle = {Advances in Neural Information Processing Systems 30},
 editor = {I. Guyon and U. V. Luxburg and S. Bengio and H. Wallach and R. Fergus and S. Vishwanathan and R. Garnett},
 pages = {4765--4774},
 year = {2017},
 publisher = {Curran Associates, Inc.},
 url = {https://papers.nips.cc/paper/7062-a-unified-approach-to-interpreting-model-predictions.pdf}
}
    
Paper link
"Consistent individualized feature attribution for tree ensembles" (Lundberg, Scott M and Erion, Gabriel G and Lee, Su-In 2018)
@article{lundberg2018consistent,
  title={Consistent individualized feature attribution for tree ensembles},
  author={Lundberg, Scott M and Erion, Gabriel G and Lee, Su-In},
  journal={arXiv preprint arXiv:1802.03888},
  year={2018}
}
    
Paper link
"Explainable machine-learning predictions for the prevention of hypoxaemia during surgery" (S. M. Lundberg et al. 2018)
@article{lundberg2018explainable,
  title={Explainable machine-learning predictions for the prevention of hypoxaemia during surgery},
  author={Lundberg, Scott M and Nair, Bala and Vavilala, Monica S and Horibe, Mayumi and Eisses, Michael J and Adams, Trevor and Liston, David E and Low, Daniel King-Wai and Newman, Shu-Fang and Kim, Jerry and others},
  journal={Nature Biomedical Engineering},
  volume={2},
  number={10},
  pages={749},
  year={2018},
  publisher={Nature Publishing Group}
}
    
Paper link

Sensitivity Analysis
"SALib: An open-source Python library for Sensitivity Analysis" (J. D. Herman and W. Usher 2017)
@article{herman2017salib,
  title={SALib: An open-source Python library for Sensitivity Analysis.},
  author={Herman, Jonathan D and Usher, Will},
  journal={J. Open Source Software},
  volume={2},
  number={9},
  pages={97},
  year={2017}
}
    
Paper link
"Factorial sampling plans for preliminary computational experiments" (M. D. Morris 1991)
@article{morris1991factorial,
  title={Factorial sampling plans for preliminary computational experiments},
  author={Morris, Max D},
  journal={Technometrics},
  volume={33},
  number={2},
  pages={161--174},
  year={1991},
  publisher={Taylor \& Francis Group}
}
    
Paper link

Partial Dependence
"Greedy function approximation: a gradient boosting machine" (J. H. Friedman 2001)
@article{friedman2001greedy,
  title={Greedy function approximation: a gradient boosting machine},
  author={Friedman, Jerome H},
  journal={Annals of statistics},
  pages={1189--1232},
  year={2001},
  publisher={JSTOR}
}
    
Paper link

Open Source Software
"Scikit-learn: Machine learning in Python" (F. Pedregosa et al. 2011)
@article{pedregosa2011scikit,
  title={Scikit-learn: Machine learning in Python},
  author={Pedregosa, Fabian and Varoquaux, Ga{\"e}l and Gramfort, Alexandre and Michel, Vincent and Thirion, Bertrand and Grisel, Olivier and Blondel, Mathieu and Prettenhofer, Peter and Weiss, Ron and Dubourg, Vincent and others},
  journal={Journal of machine learning research},
  volume={12},
  number={Oct},
  pages={2825--2830},
  year={2011}
}
    
Paper link
"Collaborative data science" (Plotly Technologies Inc. 2015)
@online{plotly, 
  author = {Plotly Technologies Inc.}, 
  title = {Collaborative data science}, 
  publisher = {Plotly Technologies Inc.}, 
  address = {Montreal, QC}, 
  year = {2015}, 
  url = {https://plot.ly}
}
    
Link
"Joblib: running python function as pipeline jobs" (G. Varoquaux and O. Grisel 2009)
@article{varoquaux2009joblib,
  title={Joblib: running python function as pipeline jobs},
  author={Varoquaux, Ga{\"e}l and Grisel, O},
  journal={packages.python.org/joblib},
  year={2009}
}
    
Link

Videos

External links

Papers that use or compare EBMs

Books that cover EBMs

External tools

Contact us

There are multiple ways to get in touch:
- Email us at interpret@microsoft.com
- Or, feel free to raise a GitHub issue

Owner

  • Name: InterpretML
  • Login: interpretml
  • Kind: organization
  • Email: interpret@microsoft.com

If a tree fell in your random forest, would anyone notice?

GitHub Events

Total
  • Create event: 43
  • Release event: 12
  • Issues event: 52
  • Watch event: 348
  • Delete event: 35
  • Issue comment event: 158
  • Push event: 555
  • Pull request review event: 8
  • Pull request review comment event: 8
  • Pull request event: 36
  • Fork event: 30
Last Year
  • Create event: 43
  • Release event: 12
  • Issues event: 52
  • Watch event: 348
  • Delete event: 35
  • Issue comment event: 158
  • Push event: 555
  • Pull request review event: 8
  • Pull request review comment event: 8
  • Pull request event: 36
  • Fork event: 30

Committers

Last synced: 9 months ago

All Time
  • Total Commits: 3,544
  • Total Committers: 48
  • Avg Commits per committer: 73.833
  • Development Distribution Score (DDS): 0.441
Past Year
  • Commits: 554
  • Committers: 8
  • Avg Commits per committer: 69.25
  • Development Distribution Score (DDS): 0.108
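
The Development Distribution Score is likely computed as 1 minus the top committer's share of commits; a quick check under that assumption reproduces the all-time figure from Paul Koch's 1,982 commits listed below:

```python
# Hypothetical definition: DDS = 1 - (top committer's commits / total commits).
total_commits, top_committer = 3544, 1982  # all-time figures from this page
print(round(1 - top_committer / total_commits, 3))  # -> 0.441
```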
Top Committers
Name Email Commits
Paul Koch c****e@k****a 1,982
Interpret ML i****l@o****m 1,325
nopdive n****e@g****m 45
dependabot[bot] 4****] 34
Eduardo de Leon e****n@m****m 21
Jessica Wolk 1****s 17
Luis França l****a@m****m 17
wamartin-aml w****n@m****m 14
Harsha Nori h****i@l****m 13
DerWeh a****h@w****e 8
Ilya Matiach i****t@m****m 6
Laure Feuillet l****t@c****a 5
Mathias von Ottenbreit 4****t 5
Mahmoud Mohammadi m****a@m****m 3
Erik Cederstrand e****k@c****k 3
Fabian Degen 1****n 3
Ashton-Sidhu a****4@g****m 3
Ben Lengerich b****h@g****m 3
mtl-tony 4****y 3
Xuezhou Zhang z****3@g****m 2
Microsoft Open Source m****e 2
Jay Wong x****7@g****m 2
Brandon Greenwell 1****1 2
Bamdev Mishra b****m@g****m 2
Mengchen Zhu m****u@g****m 1
Microsoft GitHub User m****s@m****m 1
Prateek Chanda p****1@g****m 1
EKC (Erik Cederstrand) e****c@n****m 1
Rahul r****4@s****n 1
Vítor Bernardes 3****s 1
and 18 more...
Committer Domains (Top 20 + Academic)

Issues and Pull Requests

Last synced: 6 months ago

All Time
  • Total issues: 204
  • Total pull requests: 140
  • Average time to close issues: 6 months
  • Average time to close pull requests: 28 days
  • Total issue authors: 157
  • Total pull request authors: 26
  • Average comments per issue: 3.55
  • Average comments per pull request: 1.52
  • Merged pull requests: 108
  • Bot issues: 1
  • Bot pull requests: 54
Past Year
  • Issues: 37
  • Pull requests: 41
  • Average time to close issues: 16 days
  • Average time to close pull requests: 2 days
  • Issue authors: 28
  • Pull request authors: 7
  • Average comments per issue: 1.78
  • Average comments per pull request: 2.05
  • Merged pull requests: 31
  • Bot issues: 0
  • Bot pull requests: 15
Top Authors
Issue Authors
  • DerWeh (8)
  • JWKKWJ123 (7)
  • sadsquirrel369 (6)
  • mtl-tony (4)
  • brandongreenwell-8451 (4)
  • Tejamr (3)
  • jfleh (3)
  • basnetpro3 (3)
  • onacrame (2)
  • JoshuaC3 (2)
  • lukedex (2)
  • bverhoeff (2)
  • Saurid3 (2)
  • twright8 (2)
  • bgreenwell (2)
Pull Request Authors
  • dependabot[bot] (54)
  • DerWeh (24)
  • mathias-von-ottenbreit (14)
  • degenfabian (6)
  • paulbkoch (6)
  • mtl-tony (5)
  • brandongreenwell-8451 (4)
  • alvanli (2)
  • luisffranca (2)
  • quocdat-le-insacvl (2)
  • xiaohk (2)
  • busFred (2)
  • RahulK4102 (2)
  • Krzys25 (2)
  • spyrosUofA (2)
Top Labels
Issue Labels
enhancement (20) bug (16) good first issue (1) question (1) dependencies (1) javascript (1)
Pull Request Labels
dependencies (54) python (32) javascript (20)

Packages

  • Total packages: 6
  • Total downloads:
    • pypi 703,267 last-month
    • cran 426 last-month
    • npm 365 last-month
  • Total docker downloads: 163,168
  • Total dependent packages: 30
    (may contain duplicates)
  • Total dependent repositories: 210
    (may contain duplicates)
  • Total versions: 237
  • Total maintainers: 2
pypi.org: interpret

Fit interpretable models. Explain blackbox machine learning.

  • Versions: 66
  • Dependent Packages: 24
  • Dependent Repositories: 95
  • Downloads: 269,517 Last month
  • Docker Downloads: 81,564
Rankings
Stargazers count: 0.4%
Dependent packages count: 0.5%
Average: 1.1%
Downloads: 1.1%
Docker downloads count: 1.2%
Dependent repos count: 1.5%
Forks count: 1.7%
Maintainers (1)
Last synced: 6 months ago
pypi.org: interpret-core

Fit interpretable models. Explain blackbox machine learning.

  • Versions: 45
  • Dependent Packages: 5
  • Dependent Repositories: 115
  • Downloads: 433,698 Last month
  • Docker Downloads: 81,604
Rankings
Stargazers count: 0.4%
Downloads: 0.6%
Docker downloads count: 1.2%
Average: 1.3%
Dependent repos count: 1.4%
Forks count: 1.7%
Dependent packages count: 2.4%
Maintainers (1)
Last synced: 6 months ago
proxy.golang.org: github.com/interpretml/interpret
  • Versions: 60
  • Dependent Packages: 0
  • Dependent Repositories: 0
Rankings
Dependent packages count: 5.5%
Average: 5.7%
Dependent repos count: 5.8%
Last synced: 6 months ago
npmjs.org: @interpretml/interpret-inline

Interpret inline library for rendering visualizations across all notebook environments.

  • Versions: 40
  • Dependent Packages: 1
  • Dependent Repositories: 0
  • Downloads: 365 Last month
Rankings
Stargazers count: 1.6%
Forks count: 1.7%
Downloads: 6.0%
Average: 10.2%
Dependent packages count: 16.2%
Dependent repos count: 25.3%
Maintainers (1)
Last synced: 6 months ago
pypi.org: powerlift

Interactive Benchmarking for Machine Learning.

  • Versions: 15
  • Dependent Packages: 0
  • Dependent Repositories: 0
  • Downloads: 52 Last month
Rankings
Stargazers count: 0.4%
Forks count: 1.7%
Dependent packages count: 6.6%
Average: 15.9%
Dependent repos count: 30.6%
Downloads: 40.1%
Maintainers (1)
Last synced: 6 months ago
cran.r-project.org: interpret

Fit Interpretable Machine Learning Models

  • Versions: 11
  • Dependent Packages: 0
  • Dependent Repositories: 0
  • Downloads: 426 Last month
Rankings
Stargazers count: 0.0%
Forks count: 0.1%
Average: 18.6%
Downloads: 27.6%
Dependent packages count: 29.8%
Dependent repos count: 35.5%
Maintainers (1)
Last synced: 6 months ago