evalify

Evaluate your biometric verification models literally in seconds.

https://github.com/ma7555/evalify

Science Score: 49.0%

This score indicates how likely this project is to be science-related based on various indicators:

  • CITATION.cff file
  • codemeta.json file
    Found codemeta.json file
  • .zenodo.json file
    Found .zenodo.json file
  • DOI references
    Found 3 DOI reference(s) in README
  • Academic publication links
    Links to: zenodo.org
  • Committers with academic emails
  • Institutional organization owner
  • JOSS paper metadata
  • Scientific vocabulary similarity
    Low similarity (14.3%) to scientific vocabulary

Keywords

evaluation evaluation-framework evaluation-metrics face-recognition face-verification python
Last synced: 6 months ago

Repository

Evaluate your biometric verification models literally in seconds.

Basic Info
  • Host: GitHub
  • Owner: ma7555
  • License: bsd-3-clause
  • Language: Python
  • Default Branch: main
  • Homepage:
  • Size: 3.05 MB
Statistics
  • Stars: 19
  • Watchers: 5
  • Forks: 20
  • Open Issues: 1
  • Releases: 6
Topics
evaluation evaluation-framework evaluation-metrics face-recognition face-verification python
Created about 4 years ago · Last pushed over 1 year ago
Metadata Files
Readme Changelog Contributing License Citation Authors

README.md

evalify

Badges: License · DOI · Python 3.7 | 3.8 | 3.9 | 3 · Release Status · CI Status · Documentation Status · Code style: Ruff · PyPI Downloads/Month

Evaluate Biometric Authentication Models Literally in Seconds.

Installation

Stable release:

```bash
pip install evalify
```

Bleeding edge:

```bash
pip install git+https://github.com/ma7555/evalify.git
```

Used for

Evaluating any biometric authentication model whose output is a high-level embedding: feature vectors for visual or behavioural biometrics, or d-vectors for auditory biometrics.

Usage

```python
import numpy as np
from evalify import Experiment

rng = np.random.default_rng()
n_photos = 500
emb_size = 32
n_classes = 10
X = rng.random((n_photos, emb_size))
y = rng.integers(n_classes, size=n_photos)

experiment = Experiment()
experiment.run(X, y)
experiment.get_roc_auc()
print(experiment.roc_auc)
print(experiment.find_threshold_at_fpr(0.01))
```

How it works

  • When you run an experiment, evalify tries every possible pairing of individuals for authentication based on the X and y parameters and returns the results, including FPR, TPR, FNR, TNR and ROC AUC. X is an array of embeddings and y is an array of corresponding targets.
  • evalify can also find the optimal threshold for your target FPR under your chosen similarity or distance metric.
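As a rough sketch of the idea (plain NumPy and scikit-learn, not evalify's actual implementation): every pair of samples becomes a verification trial, labelled genuine when the two identities match and impostor otherwise, and the similarity scores over all trials yield the ROC curve.

```python
import numpy as np
from itertools import combinations
from sklearn.metrics import roc_auc_score, roc_curve

rng = np.random.default_rng(42)
X = rng.random((50, 8))          # embeddings (one row per sample)
y = rng.integers(5, size=50)     # identity labels

# Every pair of samples is a verification trial:
# target 1 if same identity ("genuine"), 0 otherwise ("impostor").
pairs = np.array(list(combinations(range(len(y)), 2)))
targets = (y[pairs[:, 0]] == y[pairs[:, 1]]).astype(int)

# Cosine similarity as the trial score.
a, b = X[pairs[:, 0]], X[pairs[:, 1]]
scores = np.einsum("ij,ij->i", a, b) / (
    np.linalg.norm(a, axis=1) * np.linalg.norm(b, axis=1)
)

auc = roc_auc_score(targets, scores)
fpr, tpr, thresholds = roc_curve(targets, scores)
```

Sweeping `thresholds` against `fpr` is then what lets a tool pick the operating threshold closest to a target false-positive rate.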

Documentation:

Features

  • Blazing-fast metrics calculation through optimized Einstein summation and vectorized operations.
  • Many operations are dispatched to canonical BLAS, cuBLAS, or other specialized routines.
  • Smart sampling options using direct indexing from pre-calculated arrays with total control over sampling strategy and sampling numbers.
  • Supports most evaluation metrics:
    • cosine_similarity
    • pearson_similarity
    • cosine_distance
    • euclidean_distance
    • euclidean_distance_l2
    • minkowski_distance
    • manhattan_distance
    • chebyshev_distance
  • Computing 4 metrics over a 4.2-million-sample experiment takes 24 seconds, versus 51 minutes when looping with the scipy.spatial.distance implementations.
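As a hedged illustration of the einsum trick (not evalify's internal code), row-wise dot products over pre-indexed pair arrays avoid Python-level loops entirely; `cosine_similarity_pairs` below is a hypothetical helper name:

```python
import numpy as np

def cosine_similarity_pairs(X, idx_a, idx_b):
    """Cosine similarity for each embedding pair (idx_a[i], idx_b[i]), vectorized."""
    A, B = X[idx_a], X[idx_b]
    # "ij,ij->i" contracts each row pair to a scalar: row-wise dot products.
    dots = np.einsum("ij,ij->i", A, B)
    norms = np.sqrt(np.einsum("ij,ij->i", A, A) * np.einsum("ij,ij->i", B, B))
    return dots / norms

rng = np.random.default_rng(0)
X = rng.random((500, 32))
idx_a = rng.integers(500, size=10_000)
idx_b = rng.integers(500, size=10_000)
sims = cosine_similarity_pairs(X, idx_a, idx_b)
```

Because `np.einsum` (and the matrix products it lowers to) dispatches to BLAS-backed routines, one call scores millions of pairs at native speed.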

TODO

  • Safer memory allocation. No issues have been observed so far, but if you run out of memory, please manually set the batch_size argument.
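To show why batching bounds memory (a generic NumPy sketch, separate from evalify's own batch_size handling; `score_in_batches` is a hypothetical helper): only one chunk of pair embeddings is materialized at a time.

```python
import numpy as np

def score_in_batches(X, pairs, batch_size=100_000):
    """Cosine scores for an (n, 2) index array of pairs, computed chunk by chunk."""
    out = np.empty(len(pairs))
    for start in range(0, len(pairs), batch_size):
        chunk = pairs[start:start + batch_size]
        a, b = X[chunk[:, 0]], X[chunk[:, 1]]
        dots = np.einsum("ij,ij->i", a, b)
        out[start:start + len(chunk)] = dots / (
            np.linalg.norm(a, axis=1) * np.linalg.norm(b, axis=1)
        )
    return out
```

Peak memory scales with `batch_size * emb_size` instead of the full pair count, at the cost of a short Python loop over chunks.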

Contribution

  • Contributions are welcome and greatly appreciated! Every little bit helps, and credit will always be given.
  • Please check CONTRIBUTING.md for guidelines.

Citation

  • If you use this software, please cite it using the metadata from CITATION.cff.

Owner

  • Name: ma7555
  • Login: ma7555
  • Kind: user
  • Location: Cairo

GitHub Events

Total
  • Create event: 2
  • Issues event: 1
  • Release event: 1
  • Delete event: 1
  • Issue comment event: 2
  • Push event: 15
  • Pull request event: 4
Last Year
  • Create event: 2
  • Issues event: 1
  • Release event: 1
  • Delete event: 1
  • Issue comment event: 2
  • Push event: 15
  • Pull request event: 4

Committers

Last synced: 12 months ago

All Time
  • Total Commits: 67
  • Total Committers: 2
  • Avg Commits per committer: 33.5
  • Development Distribution Score (DDS): 0.03
Past Year
  • Commits: 13
  • Committers: 1
  • Avg Commits per committer: 13.0
  • Development Distribution Score (DDS): 0.0
Top Committers
Name Email Commits
ma7555 m****a@h****m 65
Mahmoud Bahaa m****a@t****a 2
Committer Domains (Top 20 + Academic)
t2.sa: 1

Issues and Pull Requests

Last synced: 6 months ago

All Time
  • Total issues: 1
  • Total pull requests: 27
  • Average time to close issues: N/A
  • Average time to close pull requests: about 1 hour
  • Total issue authors: 1
  • Total pull request authors: 1
  • Average comments per issue: 0.0
  • Average comments per pull request: 0.33
  • Merged pull requests: 26
  • Bot issues: 0
  • Bot pull requests: 0
Past Year
  • Issues: 1
  • Pull requests: 2
  • Average time to close issues: N/A
  • Average time to close pull requests: about 4 hours
  • Issue authors: 1
  • Pull request authors: 1
  • Average comments per issue: 0.0
  • Average comments per pull request: 1.0
  • Merged pull requests: 2
  • Bot issues: 0
  • Bot pull requests: 0
Top Authors
Issue Authors
  • ma7555 (1)
Pull Request Authors
  • ma7555 (29)
Top Labels
Issue Labels
enhancement (1) good first issue (1)
Pull Request Labels
feature (6) enhancement (4) documentation (3) fix (2)

Packages

  • Total packages: 1
  • Total downloads:
    • pypi 15 last-month
  • Total dependent packages: 0
  • Total dependent repositories: 1
  • Total versions: 6
  • Total maintainers: 1
pypi.org: evalify

Evaluate your face or voice verification models literally in seconds.

  • Versions: 6
  • Dependent Packages: 0
  • Dependent Repositories: 1
  • Downloads: 15 Last month
Rankings
Forks count: 8.4%
Dependent packages count: 10.1%
Stargazers count: 13.9%
Dependent repos count: 21.6%
Average: 24.2%
Downloads: 66.9%
Maintainers (1)
Last synced: 6 months ago

Dependencies

pyproject.toml pypi
  • black 22.1.0
  • flake8 4.0.1
  • flake8-docstrings ^1.6.0
  • isort 5.10.1
  • livereload ^2.6.3
  • mkdocs ^1.2.3
  • mkdocs-autorefs ^0.3.1
  • mkdocs-include-markdown-plugin ^3.2.3
  • mkdocs-material ^8.1.11
  • mkdocs-material-extensions ^1.0.3
  • mkdocstrings ^0.18.0
  • numpy ^1.16.0
  • pandas ^1.3.5
  • pip ^22.0.3
  • psutil ^5.0.0
  • pyreadline ^2.1
  • pytest ^7.0.1
  • pytest-cov ^3.0.0
  • python >=3.7.1,<4.0
  • scikit-learn ^1.0.0
  • toml ^0.10.2
  • tox ^3.24.5
  • twine ^3.8.0
  • virtualenv ^20.13.1
.github/workflows/codeql-analysis.yml actions
  • actions/checkout v2 composite
  • github/codeql-action/analyze v1 composite
  • github/codeql-action/autobuild v1 composite
  • github/codeql-action/init v1 composite
.github/workflows/dev.yml actions
  • actions/checkout v2 composite
  • actions/setup-python v2 composite
  • codecov/codecov-action v2 composite
  • pypa/gh-action-pypi-publish master composite
.github/workflows/release.yml actions
  • actions/checkout v2 composite
  • actions/setup-python v2 composite
  • mikepenz/release-changelog-builder-action v2.9.0 composite
  • peaceiris/actions-gh-pages v3 composite
  • pypa/gh-action-pypi-publish release/v1 composite
  • softprops/action-gh-release v1 composite