evalify
Evaluate your biometric verification models literally in seconds.
Science Score: 49.0%
This score indicates how likely this project is to be science-related based on various indicators:
- ○ CITATION.cff file
- ✓ codemeta.json file: found codemeta.json file
- ✓ .zenodo.json file: found .zenodo.json file
- ✓ DOI references: found 3 DOI reference(s) in README
- ✓ Academic publication links: links to zenodo.org
- ○ Committers with academic emails
- ○ Institutional organization owner
- ○ JOSS paper metadata
- ○ Scientific vocabulary similarity: low similarity (14.3%) to scientific vocabulary
Keywords
Repository
Evaluate your biometric verification models literally in seconds.
Basic Info
Statistics
- Stars: 19
- Watchers: 5
- Forks: 20
- Open Issues: 1
- Releases: 6
Topics
Metadata Files
README.md
evalify
Evaluate Biometric Authentication Models Literally in Seconds.
Installation
Stable release:
```bash
pip install evalify
```
Bleeding edge:
```bash
pip install git+https://github.com/ma7555/evalify.git
```
Used for
Evaluating any biometric authentication model whose output is a high-level embedding, known as a feature vector for visual or behavioural biometrics, or a d-vector for auditory biometrics.
Usage
```python
import numpy as np
from evalify import Experiment

# Generate random embeddings for 500 samples across 10 identities
rng = np.random.default_rng()
nphotos = 500
emb_size = 32
nclasses = 10
X = rng.random((nphotos, emb_size))
y = rng.integers(nclasses, size=nphotos)

experiment = Experiment()
experiment.run(X, y)
experiment.get_roc_auc()
print(experiment.roc_auc)
print(experiment.find_threshold_at_fpr(0.01))
```
How it works
- When you run an experiment, evalify tries all the possible combinations between individuals for authentication based on the `X` and `y` parameters and returns the results, including FPR, TPR, FNR, TNR, and ROC AUC. `X` is an array of embeddings and `y` is an array of the corresponding targets (see the conceptual sketch after this list).
- evalify can find the optimal threshold based on your agreed FPR and desired similarity or distance metric.
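The following is a conceptual sketch only, not evalify's internal code: it builds every pair from `X` and `y`, labels each pair genuine or impostor, scores it with cosine similarity in NumPy, and derives ROC metrics with scikit-learn.

```python
# Conceptual sketch of pairwise verification evaluation (illustrative only).
import numpy as np
from itertools import combinations
from sklearn.metrics import roc_auc_score, roc_curve

rng = np.random.default_rng(0)
X = rng.random((200, 32))          # 200 embeddings of size 32
y = rng.integers(10, size=200)     # 10 identities

# All index pairs, their cosine similarities, and genuine/impostor labels
pairs = np.array(list(combinations(range(len(X)), 2)))
a, b = X[pairs[:, 0]], X[pairs[:, 1]]
scores = np.sum(a * b, axis=1) / (np.linalg.norm(a, axis=1) * np.linalg.norm(b, axis=1))
genuine = (y[pairs[:, 0]] == y[pairs[:, 1]]).astype(int)

fpr, tpr, thresholds = roc_curve(genuine, scores)
print("ROC AUC:", roc_auc_score(genuine, scores))

# Highest threshold whose FPR stays at or below 1%
idx = np.searchsorted(fpr, 0.01, side="right") - 1
print("Threshold @ FPR <= 0.01:", thresholds[idx])
```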
Documentation: https://evalify.readthedocs.io/
Features
- Blazing-fast metric calculation through optimized Einstein summation (einsum) and vectorized operations (see the sketch after this list).
- Many operations are dispatched to canonical BLAS, cuBLAS, or other specialized routines.
- Smart sampling options using direct indexing from pre-calculated arrays with total control over sampling strategy and sampling numbers.
- Supports most evaluation metrics:
`cosine_similarity`, `pearson_similarity`, `cosine_distance`, `euclidean_distance`, `euclidean_distance_l2`, `minkowski_distance`, `manhattan_distance`, `chebyshev_distance`
- Computation time for an experiment with 4 metrics and 4.2 million samples is 24 seconds, versus 51 minutes when looping over `scipy.spatial.distance` implementations.
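As a rough illustration of the einsum-based vectorization mentioned above (again, not evalify's exact implementation), cosine similarity for a large batch of pairs can be computed in a single vectorized pass instead of looping per pair with `scipy.spatial.distance`:

```python
# Illustration of vectorized cosine similarity via einsum (not evalify's exact code).
import numpy as np

rng = np.random.default_rng(0)
A = rng.random((100_000, 32))      # first embedding of each pair
B = rng.random((100_000, 32))      # second embedding of each pair

dots = np.einsum("ij,ij->i", A, B)                          # row-wise dot products
norms = np.linalg.norm(A, axis=1) * np.linalg.norm(B, axis=1)
cosine_similarity = dots / norms                            # one similarity per pair
print(cosine_similarity.shape)                              # (100000,)
```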
TODO
- Safer memory allocation. No issues have been observed so far, but if you run out of memory, set the `batch_size` argument manually (see the sketch below).
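A minimal sketch, assuming `batch_size` is accepted as a keyword argument of `Experiment.run`; this is an assumption, so check the evalify documentation for the exact signature.

```python
# Assumption: batch_size is accepted by Experiment.run; verify against the
# evalify documentation before relying on this.
from evalify import Experiment

experiment = Experiment()
experiment.run(X, y, batch_size=100_000)  # score pairs in smaller chunks to cap memory use
```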
Contribution
- Contributions are welcome and greatly appreciated! Every little bit helps, and credit will always be given.
- Please check CONTRIBUTING.md for guidelines.
Citation
- If you use this software, please cite it using the metadata from CITATION.cff
Owner
- Name: ma7555
- Login: ma7555
- Kind: user
- Location: Cairo
- Website: https://www.kaggle.com/ma7555/
- Repositories: 57
- Profile: https://github.com/ma7555
GitHub Events
Total
- Create event: 2
- Issues event: 1
- Release event: 1
- Delete event: 1
- Issue comment event: 2
- Push event: 15
- Pull request event: 4
Last Year
- Create event: 2
- Issues event: 1
- Release event: 1
- Delete event: 1
- Issue comment event: 2
- Push event: 15
- Pull request event: 4
Committers
Last synced: 12 months ago
Top Committers
| Name | Email | Commits |
|---|---|---|
| ma7555 | m****a@h****m | 65 |
| Mahmoud Bahaa | m****a@t****a | 2 |
Committer Domains (Top 20 + Academic)
Issues and Pull Requests
Last synced: 6 months ago
All Time
- Total issues: 1
- Total pull requests: 27
- Average time to close issues: N/A
- Average time to close pull requests: about 1 hour
- Total issue authors: 1
- Total pull request authors: 1
- Average comments per issue: 0.0
- Average comments per pull request: 0.33
- Merged pull requests: 26
- Bot issues: 0
- Bot pull requests: 0
Past Year
- Issues: 1
- Pull requests: 2
- Average time to close issues: N/A
- Average time to close pull requests: about 4 hours
- Issue authors: 1
- Pull request authors: 1
- Average comments per issue: 0.0
- Average comments per pull request: 1.0
- Merged pull requests: 2
- Bot issues: 0
- Bot pull requests: 0
Top Authors
Issue Authors
- ma7555 (1)
Pull Request Authors
- ma7555 (29)
Top Labels
Issue Labels
Pull Request Labels
Packages
- Total packages: 1
- Total downloads: 15 last month on PyPI
- Total dependent packages: 0
- Total dependent repositories: 1
- Total versions: 6
- Total maintainers: 1
pypi.org: evalify
Evaluate your face or voice verification models literally in seconds.
- Homepage: https://github.com/ma7555/evalify
- Documentation: https://evalify.readthedocs.io/
- License: BSD-3-Clause
- Latest release: 1.0.0 (published over 1 year ago)
Rankings
Maintainers (1)
Dependencies
- black 22.1.0
- flake8 4.0.1
- flake8-docstrings ^1.6.0
- isort 5.10.1
- livereload ^2.6.3
- mkdocs ^1.2.3
- mkdocs-autorefs ^0.3.1
- mkdocs-include-markdown-plugin ^3.2.3
- mkdocs-material ^8.1.11
- mkdocs-material-extensions ^1.0.3
- mkdocstrings ^0.18.0
- numpy ^1.16.0
- pandas ^1.3.5
- pip ^22.0.3
- psutil ^5.0.0
- pyreadline ^2.1
- pytest ^7.0.1
- pytest-cov ^3.0.0
- python >=3.7.1,<4.0
- scikit-learn ^1.0.0
- toml ^0.10.2
- tox ^3.24.5
- twine ^3.8.0
- virtualenv ^20.13.1
- actions/checkout v2 composite
- github/codeql-action/analyze v1 composite
- github/codeql-action/autobuild v1 composite
- github/codeql-action/init v1 composite
- actions/checkout v2 composite
- actions/setup-python v2 composite
- codecov/codecov-action v2 composite
- pypa/gh-action-pypi-publish master composite
- actions/checkout v2 composite
- actions/setup-python v2 composite
- mikepenz/release-changelog-builder-action v2.9.0 composite
- peaceiris/actions-gh-pages v3 composite
- pypa/gh-action-pypi-publish release/v1 composite
- softprops/action-gh-release v1 composite