PyCM

PyCM: Multiclass confusion matrix library in Python - Published in JOSS (2018)

https://github.com/sepandhaghighi/pycm

Science Score: 98.0%

This score indicates how likely this project is to be science-related based on various indicators:

  • CITATION.cff file
    Found CITATION.cff file
  • codemeta.json file
    Found codemeta.json file
  • .zenodo.json file
    Found .zenodo.json file
  • DOI references
    Found 11 DOI reference(s) in README and JOSS metadata
  • Academic publication links
    Links to: joss.theoj.org, zenodo.org
  • Committers with academic emails
  • Institutional organization owner
  • JOSS paper metadata
    Published in Journal of Open Source Software

Keywords

accuracy ai artificial-intelligence classification confusion-matrix data data-analysis data-mining data-science deep-learning deeplearning evaluation machine-learning mathematics matrix ml multiclass-classification neural-network statistical-analysis statistics

Keywords from Contributors

graph-generation cryptocurrencies simulator chemistry fuel-cell opem pem physics physics-simulation electrochemistry
Last synced: 4 months ago

Repository

Multi-class confusion matrix library in Python

Basic Info
  • Host: GitHub
  • Owner: sepandhaghighi
  • License: mit
  • Language: Python
  • Default Branch: master
  • Homepage: http://pycm.io
  • Size: 11.8 MB
Statistics
  • Stars: 1,484
  • Watchers: 35
  • Forks: 125
  • Open Issues: 16
  • Releases: 47
Topics
accuracy ai artificial-intelligence classification confusion-matrix data data-analysis data-mining data-science deep-learning deeplearning evaluation machine-learning mathematics matrix ml multiclass-classification neural-network statistical-analysis statistics
Created almost 8 years ago · Last pushed 4 months ago
Metadata Files
Readme Changelog Contributing Funding License Code of conduct Citation Security Authors

README.md

PyCM: Python Confusion Matrix



Overview

PyCM is a multi-class confusion matrix library written in Python that supports both input data vectors and direct matrix input, and it is a proper tool for post-classification model evaluation that supports most class and overall statistics parameters. PyCM is the Swiss-army knife of confusion matrices, targeted mainly at data scientists who need a broad array of metrics for predictive models and accurate evaluation of a large variety of classifiers.

Fig1. ConfusionMatrix Block Diagram


Installation

⚠️ PyCM 4.3 is the last version to support Python 3.6

⚠️ PyCM 3.9 is the last version to support Python 3.5

⚠️ PyCM 2.4 is the last version to support Python 2.7 & Python 3.4

⚠️ Plotting capability requires Matplotlib (>= 3.0.0) or Seaborn (>= 0.9.1)

PyPI
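A minimal sketch of the usual PyPI install (a specific release can be pinned with `pycm==<version>`):

```
pip install pycm
```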

Source code
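A hedged sketch: after downloading and extracting a source archive from the releases page, installation from the repository root with pip would presumably look like:

```
# run from the extracted source directory
pip install .
```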

Conda
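Assuming the package is published on the `sepandhaghighi` Anaconda channel (the channel name is an assumption here), installation would look like:

```
# channel name assumed
conda install -c sepandhaghighi pycm
```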

MATLAB

  • Download and install MATLAB (>=8.5, 64/32 bit)
  • Download and install Python3.x (>=3.7, 64/32 bit)
    • [x] Select Add to PATH option
    • [x] Select Install pip option
  • Run pip install pycm
  • Configure Python interpreter

```matlab
pyversion PYTHON_EXECUTABLE_FULL_PATH
```

Usage

From vector

```pycon
>>> from pycm import *
>>> y_actu = [2, 0, 2, 2, 0, 1, 1, 2, 2, 0, 1, 2]
>>> y_pred = [0, 0, 2, 1, 0, 2, 1, 0, 2, 0, 2, 2]
>>> cm = ConfusionMatrix(actual_vector=y_actu, predict_vector=y_pred)
>>> cm.classes
[0, 1, 2]
>>> cm.table
{0: {0: 3, 1: 0, 2: 0}, 1: {0: 0, 1: 1, 2: 2}, 2: {0: 2, 1: 1, 2: 3}}
>>> cm.print_matrix()
Predict   0         1         2
Actual
0         3         0         0

1         0         1         2

2         2         1         3

>>> cm.print_normalized_matrix()
Predict   0              1              2
Actual
0         1.0            0.0            0.0

1         0.0            0.33333        0.66667

2         0.33333        0.16667        0.5

>>> cm.stat(summary=True)
Overall Statistics :

ACC Macro                                                         0.72222
F1 Macro                                                          0.56515
FPR Macro                                                         0.22222
Kappa                                                             0.35484
Overall ACC                                                       0.58333
PPV Macro                                                         0.56667
SOA1(Landis & Koch)                                               Fair
TPR Macro                                                         0.61111
Zero-one Loss                                                     5

Class Statistics :

Classes                                                           0              1              2
ACC(Accuracy)                                                     0.83333        0.75           0.58333
AUC(Area under the ROC curve)                                     0.88889        0.61111        0.58333
AUCI(AUC value interpretation)                                    Very Good      Fair           Poor
F1(F1 score - harmonic mean of precision and sensitivity)         0.75           0.4            0.54545
FN(False negative/miss/type 2 error)                              0              2              3
FP(False positive/type 1 error/false alarm)                       2              1              2
FPR(Fall-out or false positive rate)                              0.22222        0.11111        0.33333
N(Condition negative)                                             9              9              6
P(Condition positive or support)                                  3              3              6
POP(Population)                                                   12             12             12
PPV(Precision or positive predictive value)                       0.6            0.5            0.6
TN(True negative/correct rejection)                               7              8              4
TON(Test outcome negative)                                        7              10             7
TOP(Test outcome positive)                                        5              2              5
TP(True positive/hit)                                             3              1              3
TPR(Sensitivity, recall, hit rate, or true positive rate)         1.0            0.33333        0.5

```

Direct CM

```pycon
>>> from pycm import *
>>> cm2 = ConfusionMatrix(matrix={"Class1": {"Class1": 1, "Class2": 2}, "Class2": {"Class1": 0, "Class2": 5}})
>>> cm2
pycm.ConfusionMatrix(classes: ['Class1', 'Class2'])
>>> cm2.classes
['Class1', 'Class2']
>>> cm2.print_matrix()
Predict   Class1    Class2
Actual
Class1    1         2

Class2    0         5

>>> cm2.print_normalized_matrix()
Predict   Class1         Class2
Actual
Class1    0.33333        0.66667

Class2    0.0            1.0

>>> cm2.stat(summary=True)
Overall Statistics :

ACC Macro                                                         0.75
F1 Macro                                                          0.66667
FPR Macro                                                         0.33333
Kappa                                                             0.38462
Overall ACC                                                       0.75
PPV Macro                                                         0.85714
SOA1(Landis & Koch)                                               Fair
TPR Macro                                                         0.66667
Zero-one Loss                                                     2

Class Statistics :

Classes                                                           Class1         Class2
ACC(Accuracy)                                                     0.75           0.75
AUC(Area under the ROC curve)                                     0.66667        0.66667
AUCI(AUC value interpretation)                                    Fair           Fair
F1(F1 score - harmonic mean of precision and sensitivity)         0.5            0.83333
FN(False negative/miss/type 2 error)                              2              0
FP(False positive/type 1 error/false alarm)                       0              2
FPR(Fall-out or false positive rate)                              0.0            0.66667
N(Condition negative)                                             5              3
P(Condition positive or support)                                  3              5
POP(Population)                                                   8              8
PPV(Precision or positive predictive value)                       1.0            0.71429
TN(True negative/correct rejection)                               5              1
TON(Test outcome negative)                                        7              1
TOP(Test outcome positive)                                        1              7
TP(True positive/hit)                                             1              5
TPR(Sensitivity, recall, hit rate, or true positive rate)         0.33333        1.0

```

  • matrix() and normalized_matrix() were renamed to print_matrix() and print_normalized_matrix() in version 1.5

Activation threshold

The threshold parameter was added in version 0.9 for real-value prediction. For more information visit Example3
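A minimal sketch, assuming threshold accepts a callable that maps each real-valued prediction to a class label (the vectors and the 0.5 cutoff here are illustrative):

```pycon
>>> from pycm import ConfusionMatrix
>>> y_true = [0, 0, 1, 1, 0]
>>> y_score = [0.12, 0.34, 0.90, 0.87, 0.79]
>>> # map scores >= 0.5 to class 1, everything else to class 0
>>> cm = ConfusionMatrix(actual_vector=y_true, predict_vector=y_score, threshold=lambda x: 1 if x >= 0.5 else 0)
>>> cm.classes
[0, 1]
```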

Load from file

The file parameter was added in version 0.9.5 in order to load a saved confusion matrix in .obj format generated by the save_obj method.

For more information visit Example4
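A minimal sketch, assuming save_obj writes a cm1.obj file to the working directory and that the file is reopened in text mode (the file name is illustrative, and cm is the matrix from the vector example above):

```pycon
>>> save_result = cm.save_obj("cm1")          # writes cm1.obj to the working directory
>>> cm_loaded = ConfusionMatrix(file=open("cm1.obj", "r"))
>>> cm_loaded.classes
[0, 1, 2]
```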

Sample weights

The sample_weight parameter was added in version 1.2.

For more information visit Example5
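A minimal sketch, assuming each weight simply scales the corresponding observation's contribution to the cell counts (the vectors and weights here are illustrative):

```pycon
>>> # two class-1 observations carry weight 2, so they count twice in the table
>>> cm = ConfusionMatrix(actual_vector=[0, 0, 1, 1], predict_vector=[0, 0, 0, 1], sample_weight=[1, 1, 2, 2])
>>> cm.table
{0: {0: 2, 1: 0}, 1: {0: 2, 1: 2}}
```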

Transpose

The transpose parameter was added in version 1.2 in order to transpose the input matrix (available only in Direct CM mode).
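A minimal sketch, assuming transpose=True swaps the actual and predict axes of the direct input matrix (reusing the matrix from the Direct CM example above):

```pycon
>>> cm_t = ConfusionMatrix(matrix={"Class1": {"Class1": 1, "Class2": 2}, "Class2": {"Class1": 0, "Class2": 5}}, transpose=True)
>>> cm_t.table   # rows and columns of the input matrix are swapped
{'Class1': {'Class1': 1, 'Class2': 0}, 'Class2': {'Class1': 2, 'Class2': 5}}
```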

Relabel

The relabel method was added in version 1.5 in order to change the ConfusionMatrix class names.

```pycon
>>> cm.relabel(mapping={0: "L1", 1: "L2", 2: "L3"})
>>> cm
pycm.ConfusionMatrix(classes: ['L1', 'L2', 'L3'])
```

Position

The position method was added in version 2.8 in order to find the indices of observations in predict_vector that produced TP, TN, FP, and FN.

```pycon
>>> cm.position()
{0: {'FN': [], 'FP': [0, 7], 'TP': [1, 4, 9], 'TN': [2, 3, 5, 6, 8, 10, 11]}, 1: {'FN': [5, 10], 'FP': [3], 'TP': [6], 'TN': [0, 1, 2, 4, 7, 8, 9, 11]}, 2: {'FN': [0, 3, 7], 'FP': [5, 10], 'TP': [2, 8, 11], 'TN': [1, 4, 6, 9]}}
```

To array

The to_array method was added in version 2.9 in order to return the confusion matrix as a NumPy array. This can be helpful for applying different operations to the confusion matrix, such as aggregation, normalization, and combination.

```pycon
>>> cm.to_array()
array([[3, 0, 0],
       [0, 1, 2],
       [2, 1, 3]])
>>> cm.to_array(normalized=True)
array([[1.     , 0.     , 0.     ],
       [0.     , 0.33333, 0.66667],
       [0.33333, 0.16667, 0.5    ]])
>>> cm.to_array(normalized=True, one_vs_all=True, class_name="L1")
array([[1.     , 0.     ],
       [0.22222, 0.77778]])
```

Combine

The combine method was added in version 3.0 in order to merge two confusion matrices. This option is useful in mini-batch learning.

```pycon
>>> cm_combined = cm2.combine(cm3)
>>> cm_combined.print_matrix()
Predict   Class1    Class2
Actual
Class1    2         4

Class2    0         10

```

Plot

The plot method was added in version 3.0 in order to plot a confusion matrix using Matplotlib or Seaborn.

```pycon
>>> cm.plot()
```

```pycon
>>> from matplotlib import pyplot as plt
>>> cm.plot(cmap=plt.cm.Greens, number_label=True, plotlib="matplotlib")
```

```pycon
>>> cm.plot(cmap=plt.cm.Reds, normalized=True, number_label=True, plotlib="seaborn")
```

ROC curve

ROCCurve, added in version 3.7, is devised to compute the Receiver Operating Characteristic (ROC) curve. In ROC curves, the Y axis represents the true positive rate and the X axis represents the false positive rate, so the ideal point is located at the top left of the curve, and a larger area under the curve represents better performance. The ROC curve is a graphical representation of a binary classifier's performance. In PyCM, ROCCurve binarizes the output based on the "One vs. Rest" strategy to provide an extension of ROC for multi-class classifiers. Given the actual labels vector, the target probability estimates of the positive classes, and the list of ordered class labels, this method computes and plots TPR-FPR pairs for different discrimination thresholds and computes the area under the ROC curve.

```pycon
>>> import numpy as np
>>> crv = ROCCurve(actual_vector=np.array([1, 1, 2, 2]), probs=np.array([[0.1, 0.9], [0.4, 0.6], [0.35, 0.65], [0.8, 0.2]]), classes=[2, 1])
>>> crv.thresholds
[0.1, 0.2, 0.35, 0.4, 0.6, 0.65, 0.8, 0.9]
>>> auc_trp = crv.area()
>>> auc_trp[1]
0.75
>>> auc_trp[2]
0.75
```

Precision-Recall curve

PRCurve, added in version 3.7, is devised to compute the Precision-Recall curve, in which the Y axis represents precision and the X axis represents recall, so the ideal point is located at the top right of the curve, and a larger area under the curve represents better performance. The Precision-Recall curve is a graphical representation of a binary classifier's performance. In PyCM, PRCurve binarizes the output based on the "One vs. Rest" strategy to provide an extension of this curve for multi-class classifiers. Given the actual labels vector, the target probability estimates of the positive classes, and the list of ordered class labels, this method computes and plots precision-recall pairs for different discrimination thresholds and computes the area under the curve.

```pycon
>>> import numpy as np
>>> crv = PRCurve(actual_vector=np.array([1, 1, 2, 2]), probs=np.array([[0.1, 0.9], [0.4, 0.6], [0.35, 0.65], [0.8, 0.2]]), classes=[2, 1])
>>> crv.thresholds
[0.1, 0.2, 0.35, 0.4, 0.6, 0.65, 0.8, 0.9]
>>> auc_trp = crv.area()
>>> auc_trp[1]
0.29166666666666663
>>> auc_trp[2]
0.29166666666666663
```

Parameter recommender

This option was added in version 1.9 to recommend the most relevant parameters considering the characteristics of the input dataset. The suggested parameters are selected according to characteristics of the input such as being balanced/imbalanced and binary/multi-class. All suggestions fall into three main groups: imbalanced dataset, binary classification on a balanced dataset, and multi-class classification on a balanced dataset. The recommendation lists have been compiled according to each parameter's respective paper and the capabilities claimed in that paper.

```pycon
>>> cm.imbalance
False
>>> cm.binary
False
>>> cm.recommended_list
['MCC', 'TPR Micro', 'ACC', 'PPV Macro', 'BCD', 'Overall MCC', 'Hamming Loss', 'TPR Macro', 'Zero-one Loss', 'ERR', 'PPV Micro', 'Overall ACC']
```

The is_imbalanced parameter was added in version 3.3, so the user can indicate whether the dataset is imbalanced or not. If the user does not provide any information in this regard, an automatic detection algorithm is used.

```pycon
>>> cm = ConfusionMatrix(y_actu, y_pred, is_imbalanced=True)
>>> cm.imbalance
True
>>> cm = ConfusionMatrix(y_actu, y_pred, is_imbalanced=False)
>>> cm.imbalance
False
```

Compare

In version 2.0, a method for comparing several confusion matrices was introduced. This option is a combination of several overall and class-based benchmarks. Each of the benchmarks evaluates the performance of the classification algorithm from good to poor and gives it a numeric score; good and poor performances score 1 and 0, respectively.

After that, two scores are calculated for each confusion matrix: overall and class-based. The overall score is the average of the scores of seven overall benchmarks: Landis & Koch, Cramer, Matthews, Goodman-Kruskal's Lambda A, Goodman-Kruskal's Lambda B, Krippendorff's Alpha, and Pearson's C. In the same manner, the class-based score is the average of the scores of six class-based benchmarks: Positive Likelihood Ratio Interpretation, Negative Likelihood Ratio Interpretation, Discriminant Power Interpretation, AUC value Interpretation, Matthews Correlation Coefficient Interpretation, and Yule's Q Interpretation. Note that if one of the benchmarks returns None for one of the classes, that benchmark is excluded from the averaging. If the user sets weights for the classes, the averaging over the class-based benchmark scores becomes a weighted average.

If the user sets the by_class boolean input to True, the best confusion matrix is the one with the maximum class-based score. Otherwise, if a confusion matrix obtains the maximum of both the overall and class-based scores, it is reported as the best confusion matrix; in any other case, the Compare object does not select a best confusion matrix.

```pycon
>>> cm2 = ConfusionMatrix(matrix={0: {0: 2, 1: 50, 2: 6}, 1: {0: 5, 1: 50, 2: 3}, 2: {0: 1, 1: 7, 2: 50}})
>>> cm3 = ConfusionMatrix(matrix={0: {0: 50, 1: 2, 2: 6}, 1: {0: 50, 1: 5, 2: 3}, 2: {0: 1, 1: 55, 2: 2}})
>>> cp = Compare({"cm2": cm2, "cm3": cm3})
>>> print(cp)
Best : cm2

Rank  Name   Class-Score       Overall-Score
1     cm2    0.50278           0.58095
2     cm3    0.33611           0.52857

>>> cp.best
pycm.ConfusionMatrix(classes: [0, 1, 2])
>>> cp.sorted
['cm2', 'cm3']
>>> cp.best_name
'cm2'
```

Multilabel confusion matrix

From version 4.0, MultiLabelCM has been added to calculate class-wise or sample-wise multilabel confusion matrices. In class-wise mode, confusion matrices are calculated for each class, and in sample-wise mode, they are generated per sample. All generated confusion matrices are binarized with a one-vs-rest transformation.

```pycon
>>> mlcm = MultiLabelCM(actual_vector=[{"cat", "bird"}, {"dog"}], predict_vector=[{"cat"}, {"dog", "bird"}], classes=["cat", "dog", "bird"])
>>> mlcm.actual_vector_multihot
[[1, 0, 1], [0, 1, 0]]
>>> mlcm.predict_vector_multihot
[[1, 0, 0], [0, 1, 1]]
>>> mlcm.get_cm_by_class("cat").print_matrix()
Predict   0         1
Actual
0         1         0

1         0         1

>>> mlcm.get_cm_by_sample(0).print_matrix()
Predict   0         1
Actual
0         1         0

1         1         1

```

Online help

The online_help function was added in version 1.1 in order to open each statistic's definition in a web browser.

```pycon
>>> from pycm import online_help
>>> online_help("J")
>>> online_help("SOA1(Landis & Koch)")
>>> online_help(2)
```

  • A list of items is available by calling online_help() (without an argument)
  • If the PyCM website is not available, set alt_link=True (new in version 2.4)

Screen record

Try PyCM in your browser!

PyCM can be used online in interactive Jupyter notebooks via the Binder or Colab services. Try it out now:

Binder

Google Colab

  • Check Examples in Document folder

Issues & bug reports

  1. File an issue and describe it. We'll check it ASAP!
    • Please complete the issue template
  2. Discord : https://discord.com/invite/zqpU2b3J3f
  3. Website : https://www.pycm.io
  4. Mailing List : https://mail.python.org/mailman3/lists/pycm.python.org/
  5. Email : info@pycm.io

Acknowledgments

NLnet foundation has supported the PyCM project from version 4.3 to 4.7 through the NGI0 Commons Fund. This fund is set up by NLnet foundation with funding from the European Commission's Next Generation Internet program, administered by DG Communications Networks, Content, and Technology under grant agreement No 101135429.

NLnet foundation   NGI0 Commons

NLnet foundation has supported the PyCM project from version 3.6 to 4.0 through the NGI Assure Fund. This fund is set up by NLnet foundation with funding from the European Commission's Next Generation Internet program, administered by DG Communications Networks, Content, and Technology under grant agreement No 957073.

NLnet foundation   NGI Assure

The Python Software Foundation (PSF) partially funded version 3.7 of the PyCM library. PSF is the organization behind Python. Their mission is to promote, protect, and advance the Python programming language and to support and facilitate the growth of a diverse and international community of Python programmers.

Python Software Foundation

Some parts of the infrastructure for this project are supported by:

DigitalOcean

Cite

If you use PyCM in your research, we would appreciate citations to the following paper:

Haghighi, S., Jasemi, M., Hessabi, S. and Zolanvari, A., 2018. PyCM: Multiclass confusion matrix library in Python. Journal of Open Source Software, 3(25), p.729.

```bibtex
@article{Haghighi2018,
  doi = {10.21105/joss.00729},
  url = {https://doi.org/10.21105/joss.00729},
  year = {2018},
  month = {may},
  publisher = {The Open Journal},
  volume = {3},
  number = {25},
  pages = {729},
  author = {Sepand Haghighi and Masoomeh Jasemi and Shaahin Hessabi and Alireza Zolanvari},
  title = {{PyCM}: Multiclass confusion matrix library in Python},
  journal = {Journal of Open Source Software}
}
```

Download PyCM.bib

JOSS
Zenodo DOI

Show your support

Star this repo

Give a ⭐️ if this project helped you!

Donate to our project

If you like our project, and we hope that you do, can you please support us? Our project is not, and is never going to be, working for profit. We need the money just so we can continue doing what we do ;-) .

PyCM Donation

Owner

  • Name: Sepand Haghighi
  • Login: sepandhaghighi
  • Kind: user
  • Location: Aalborg, Denmark
  • Company: Denu

Open Source Enthusiast

JOSS Publication

PyCM: Multiclass confusion matrix library in Python
Published
May 29, 2018
Volume 3, Issue 25, Page 729
Authors
Sepand Haghighi ORCID
Sharif University of Technology
Masoomeh Jasemi ORCID
Sharif University of Technology
Shaahin Hessabi ORCID
Sharif University of Technology
Alireza Zolanvari ORCID
Amirkabir University of Technology
Editor
Ariel Rokem ORCID
Tags
confusion-matrix classification statistics statistical-analysis analysis machine-learning data-analysis python

Citation (CITATION.cff)

cff-version: 1.2.0
message: "If you use this software, please cite it as below."
title: "pycm"
abstract: "PyCM is a multi-class confusion matrix library written in Python that supports both input data vectors and direct matrix, and a proper tool for post-classification model evaluation that supports most classes and overall statistics parameters. PyCM is the swiss-army knife of confusion matrices, targeted mainly at data scientists that need a broad array of metrics for predictive models and accurate evaluation of a large variety of classifiers."
authors:
  - family-names: "Haghighi"
    given-names: "Sepand"
  - family-names: "Zolanvari"
    given-names: "Alireza"
  - family-names: "Sabouri"
    given-names: "Sadra"
version: 3.3
date-released: 2021-10-27
repository-code: "https://github.com/sepandhaghighi/pycm"
url: "https://www.pycm.io"
license: MIT
keywords:
    - "confusion matrix"
    - "python"
    - "F-score"
    - "Accuracy"
preferred-citation:
  type: article
  authors:
  - family-names: "Haghighi"
    given-names: "Sepand"
    orcid: "https://orcid.org/0000-0001-9450-2375"
  - family-names: "Jasemi"
    given-names: "Masoomeh"
    orcid: "https://orcid.org/0000-0002-4831-1698"
  - family-names: "Hessabi"
    given-names: "Shaahin"
    orcid: "https://orcid.org/0000-0003-3193-2567"
  - family-names: "Zolanvari"
    given-names: "Alireza"
    orcid: "https://orcid.org/0000-0003-2367-8343"
  doi: "10.21105/joss.00729"
  journal: "Journal of Open Source Software"
  month: 5
  start: 729 # First page number
  end: 729 # Last page number
  title: "PyCM: Multiclass confusion matrix library in Python"
  issue: 25
  volume: 3
  year: 2018

Papers & Mentions

Total mentions: 2

Data-driven classification of the certainty of scholarly assertions
Last synced: 2 months ago
A novel hand-crafted with deep learning features based fusion model for COVID-19 diagnosis and classification using chest X-ray images
Last synced: 2 months ago

GitHub Events

Total
  • Create event: 31
  • Issues event: 11
  • Release event: 3
  • Watch event: 36
  • Delete event: 30
  • Issue comment event: 39
  • Push event: 73
  • Pull request event: 65
  • Pull request review comment event: 44
  • Pull request review event: 69
  • Fork event: 4
Last Year
  • Create event: 31
  • Issues event: 11
  • Release event: 3
  • Watch event: 36
  • Delete event: 30
  • Issue comment event: 39
  • Push event: 73
  • Pull request event: 65
  • Pull request review comment event: 44
  • Pull request review event: 69
  • Fork event: 4

Committers

Last synced: 5 months ago

All Time
  • Total Commits: 2,849
  • Total Committers: 18
  • Avg Commits per committer: 158.278
  • Development Distribution Score (DDS): 0.234
Past Year
  • Commits: 31
  • Committers: 5
  • Avg Commits per committer: 6.2
  • Development Distribution Score (DDS): 0.613
Top Committers
Name Email Commits
sepandhaghighi s****i@y****m 2,182
sadrasabouri s****a@g****m 285
alirezazolanvari a****3@g****m 161
pyup-bot g****t@p****o 81
dependabot[bot] 4****] 50
dependabot-preview[bot] 2****] 36
alirezazolanvari a****3@g****m 20
geet w****4@g****m 9
Negar Zabetian n****n@g****m 5
sadrasabouri s****i@g****m 4
AmirHosein Rostami 3****e 3
Kwame Porter Robinson k****n@g****m 3
Lewi Uberg 4****g 3
Mohammad Mahdi Rahimi m****6@G****m 3
Masi_Jsm m****i@g****m 1
Sohee Yang s****g@n****m 1
cclauss c****s@m****m 1
the-lay i****n@g****m 1
Committer Domains (Top 20 + Academic)

Issues and Pull Requests

Last synced: 4 months ago

All Time
  • Total issues: 41
  • Total pull requests: 178
  • Average time to close issues: 10 months
  • Average time to close pull requests: 5 days
  • Total issue authors: 14
  • Total pull request authors: 7
  • Average comments per issue: 1.51
  • Average comments per pull request: 1.5
  • Merged pull requests: 154
  • Bot issues: 1
  • Bot pull requests: 58
Past Year
  • Issues: 9
  • Pull requests: 63
  • Average time to close issues: 6 months
  • Average time to close pull requests: 5 days
  • Issue authors: 3
  • Pull request authors: 5
  • Average comments per issue: 0.22
  • Average comments per pull request: 1.05
  • Merged pull requests: 50
  • Bot issues: 1
  • Bot pull requests: 19
Top Authors
Issue Authors
  • sepandhaghighi (25)
  • tanjiu (3)
  • alirezazolanvari (3)
  • lewiuberg (2)
  • manjaneqx (1)
  • adrianog (1)
  • huhang14 (1)
  • myacinecoding (1)
  • ghnreigns (1)
  • elliestath (1)
  • fhausmann (1)
  • dillonroach (1)
  • bcdarwin (1)
Pull Request Authors
  • sepandhaghighi (92)
  • dependabot[bot] (80)
  • sadrasabouri (27)
  • alirezazolanvari (14)
  • AHReccese (6)
  • tosemml (1)
  • sheetcoder (1)
Top Labels
Issue Labels
enhancement (11) Document (10) new feature (8) test (4) bug (3) discussion (2) question (1) warning (1) installation (1) minor (1)
Pull Request Labels
dependencies (83) Document (49) nlnet (43) enhancement (40) test (31) minor (19) new feature (15) release (13) refactoring (10) website (9) bug (5) python (4) performance (3) psf (2) installation (1) stale (1) major (1) warning (1) broken link (1)

Packages

  • Total packages: 2
  • Total downloads:
    • pypi 175,398 last-month
  • Total docker downloads: 36
  • Total dependent packages: 4
    (may contain duplicates)
  • Total dependent repositories: 50
    (may contain duplicates)
  • Total versions: 52
  • Total maintainers: 3
pypi.org: pycm

Multi-class confusion matrix library in Python

  • Versions: 48
  • Dependent Packages: 4
  • Dependent Repositories: 50
  • Downloads: 175,398 Last month
  • Docker Downloads: 36
Rankings
Downloads: 1.5%
Stargazers count: 1.8%
Dependent packages count: 1.9%
Dependent repos count: 2.1%
Docker downloads count: 2.2%
Average: 2.3%
Forks count: 4.3%
Last synced: 4 months ago
proxy.golang.org: github.com/sepandhaghighi/pycm
  • Versions: 4
  • Dependent Packages: 0
  • Dependent Repositories: 0
Rankings
Dependent packages count: 5.6%
Average: 5.8%
Dependent repos count: 6.0%
Last synced: 4 months ago

Dependencies

dev-requirements.txt pypi
  • art ==5.6 development
  • bandit >=1.5.1 development
  • codecov >=2.0.15 development
  • matplotlib >=3.0.0 development
  • numpy ==1.22.3 development
  • pydocstyle >=3.0.0 development
  • pytest >=4.3.1 development
  • pytest-cov >=2.6.1 development
  • seaborn >=0.9.1 development
  • setuptools >=40.8.0 development
  • vulture >=1.0 development
requirements.txt pypi
  • art >=1.8
  • numpy >=1.9.0
.github/workflows/publish_conda.yaml actions
  • actions/checkout v1 composite
  • sepandhaghighi/conda-package-publish-action v1.2 composite
.github/workflows/publish_pypi.yml actions
  • actions/checkout v2 composite
  • actions/setup-python v1 composite
.github/workflows/test.yml actions
  • actions/checkout v2 composite
  • actions/setup-python v2 composite
docker/Dockerfile docker
  • ubuntu 16.04 build
setup.py pypi