https://github.com/awslabs/cis-matching-tasks
A package for constructing confidence intervals for error rates in matching tasks such as 1:1 face and speaker verification.
Science Score: 23.0%
This score indicates how likely this project is to be science-related based on various indicators:
- ○ CITATION.cff file
- ✓ codemeta.json file: found codemeta.json file
- ○ .zenodo.json file
- ○ DOI references
- ✓ Academic publication links: links to arxiv.org
- ○ Academic email domains
- ○ Institutional organization owner
- ○ JOSS paper metadata
- ○ Scientific vocabulary similarity: low similarity (13.3%) to scientific vocabulary
Keywords
Repository
Basic Info
Statistics
- Stars: 8
- Watchers: 1
- Forks: 0
- Open Issues: 2
- Releases: 0
Topics
Metadata Files
README.md
Confidence Intervals for Error Rates in :jigsaw: Matching Tasks
This repository hosts the cimat (Confidence Intervals for MAtching Tasks) package, designed to create confidence intervals for performance metrics in 1:1 matching tasks like face and speaker verification.
With cimat, you can generate confidence intervals ($C_{\alpha}$) with a confidence level of $1-\alpha$ for metrics ($\theta^*$) such as:
- False Positive Rate (FPR, aka FMR or FAR) and False Negative Rate (FNR, aka FNMR or FRR) estimates
- ROC coordinate estimates such as FNR@FPR (aka FNMR@FMR or FRR@FAR)
such that $\mathbb{P}(\theta^*\in C_{\alpha})\geq 1-\alpha$. Check out our paper for a description of the methods.
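For orientation, and with the convention that higher similarity scores indicate a match, these error rates at a decision threshold $t$ are the standard quantities (up to tie-breaking conventions at $t$; see the paper for the exact estimands and estimators):

$$
\mathrm{FNR}(t) = \mathbb{P}\big(S \le t \mid \text{genuine pair}\big), \qquad \mathrm{FPR}(t) = \mathbb{P}\big(S > t \mid \text{impostor pair}\big),
$$

where $S$ denotes the similarity score of a pair of images.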
:rocket: Getting started
To install the cimat package, run

```
pip install cimat
```

or

```
pip install git+https://github.com/awslabs/cis-matching-tasks.git
```

for the latest version of the package.
Test your setup using the jumpstarter.ipynb notebook or by copying and pasting the following code, which derives confidence intervals for FNMR and FMR obtained by binarizing the similarity scores at a given threshold on synthetic data:
```python
import json
from cimat import MTData, UncertaintyEstimator
import numpy as np

# Generate embeddings (here you would import your own embeddings)
# Example structure: dictionary[id][image] = embedding
df = {id: {img: np.random.normal(id, 1, 100) for img in range(5)} for id in range(25)}
mt = MTData(df)
mt.generate_similarity_scores()  # Generate cosine similarity scores between images

# Set a threshold for determining matches versus non-matches
threshold = 0.7

# Instantiate the class to estimate error rates using similarity scores
# Example structure: dictionary[id1][id2] = [score between image from id1 and id2]
uq = UncertaintyEstimator(scores=mt.similarity_scores)

# Compute False Non-Match Rate (FNMR, aka FNR) and False Match Rate (FMR, aka FPR) based on the threshold
fnr, fpr, _ = uq.compute_binerror_metrics(threshold)
fnr, fpr

# Calculate 95% Confidence Intervals (CI) for FNMR and FMR using Wilson's method
# with a plug-in estimator of the variance
var_fnr, var_fpr = uq.compute_variance(threshold=threshold, estimator="plugin")
ci_fnr, ci_fpr = uq.get_binerror_ci(threshold=threshold, var_fnr=var_fnr, var_fpr=var_fpr, alpha=0.05)
ci_fnr, ci_fpr

# ... or with a double-or-nothing bootstrap estimator of the variance (not needed if you're using the plug-in estimator already)
uq.run_bootstrap(B=1000)  # runs the bootstrap
var_fnr_boot, var_fpr_boot = uq.compute_variance(threshold=threshold, estimator="boot")
ci_fnr_boot, ci_fpr_boot = uq.get_binerror_ci(threshold=threshold, var_fnr=var_fnr_boot, var_fpr=var_fpr_boot, alpha=0.05)
ci_fnr_boot, ci_fpr_boot
```
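The synthetic df above only illustrates the dictionary[id][image] = embedding structure that MTData expects. As a minimal, hypothetical sketch of how you might assemble that structure from your own precomputed embeddings (the records list and its field names below are made up for illustration):

```python
from collections import defaultdict
import numpy as np
from cimat import MTData

# Hypothetical records of your own data: (identity, image name, embedding vector).
records = [
    ("alice", "img_0.jpg", np.random.normal(size=100)),
    ("alice", "img_1.jpg", np.random.normal(size=100)),
    ("bob", "img_0.jpg", np.random.normal(size=100)),
]

# Build the nested dictionary[id][image] = embedding structure expected by MTData.
embeddings = defaultdict(dict)
for identity, image, vector in records:
    embeddings[identity][image] = vector

mt = MTData(dict(embeddings))
mt.generate_similarity_scores()  # same call as in the snippet above
```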
To generate the intervals without worrying about variance estimation, use

```python
uq.get_binerror_ci(threshold=threshold, alpha=0.05)
```

Under the hood, this function computes the variance with the plug-in estimator.
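For intuition only, here is a hedged sketch of what a Wilson-style interval looks like once the usual binomial variance is replaced by an estimated variance; the exact construction used by cimat and in the paper may differ in details. Writing $\hat{\theta}$ for the error-rate estimate, $\hat{\sigma}^2$ for its estimated variance (plug-in or bootstrap), $z = z_{1-\alpha/2}$ for the normal quantile, and $n_{\mathrm{eff}} = \hat{\theta}(1-\hat{\theta})/\hat{\sigma}^2$ for an effective sample size,

$$
C_{\alpha} = \frac{\hat{\theta} + \frac{z^2}{2 n_{\mathrm{eff}}}}{1 + \frac{z^2}{n_{\mathrm{eff}}}} \pm \frac{z}{1 + \frac{z^2}{n_{\mathrm{eff}}}} \sqrt{\hat{\sigma}^2 + \frac{z^2}{4 n_{\mathrm{eff}}^2}},
$$

which reduces to the classical Wilson score interval when $\hat{\sigma}^2 = \hat{\theta}(1-\hat{\theta})/n$.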
To obtain pointwise confidence intervals for the ROC with the double-or-nothing bootstrap, use

```python
fpr, tpr, auc = uq.get_roc(target_fpr=[0.01, 0.1])
# you must have run the bootstrap through uq.run_bootstrap(B=1000)
ci_tpr_at_fpr, ci_auc = uq.get_roc_ci(target_fpr=[0.01, 0.1], alpha=0.05)
ci_tpr_at_fpr, ci_auc
```
See the notebook for the MORPH dataset (morph.ipynb) for a more detailed example of how to use the package. For large datasets, the uncertainty computations may be burdensome; the computational speed of the functions in this package can be substantially improved through parallelization.
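As one possible pattern (a hedged sketch, not an API provided by cimat): confidence intervals at several thresholds can be computed in parallel worker processes, assuming the UncertaintyEstimator instance built above (uq) can be pickled and shipped to the workers. The example below uses joblib, a third-party dependency that cimat does not require.

```python
from joblib import Parallel, delayed  # third-party dependency, not required by cimat

thresholds = [0.5, 0.6, 0.7, 0.8]

# Compute binarized error-rate CIs at several thresholds in parallel worker processes,
# reusing only the get_binerror_ci call shown earlier. Assumes `uq` is picklable.
cis = Parallel(n_jobs=4)(
    delayed(uq.get_binerror_ci)(threshold=t, alpha=0.05) for t in thresholds
)

for t, ci in zip(thresholds, cis):
    print(f"threshold={t}: {ci}")
```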
We have moved all the code related to the experiments in the paper to another
branch named paper.
:books: Citation
To cite our paper/code/package, use
```bibtex
@article{fogliato2024confidence,
  title={Confidence intervals for error rates in 1:1 matching tasks: Critical statistical analysis and recommendations},
  author={Fogliato, Riccardo and Patil, Pratik and Perona, Pietro},
  journal={International Journal of Computer Vision},
  pages={1--26},
  year={2024},
  publisher={Springer}
}
```
Security
See CONTRIBUTING for more information.
License
This project is licensed under the Apache-2.0 License.
Owner
- Name: Amazon Web Services - Labs
- Login: awslabs
- Kind: organization
- Location: Seattle, WA
- Website: http://amazon.com/aws/
- Repositories: 914
- Profile: https://github.com/awslabs
AWS Labs
GitHub Events
Total
- Watch event: 1
Last Year
- Watch event: 1
Issues and Pull Requests
Last synced: 9 months ago
All Time
- Total issues: 0
- Total pull requests: 2
- Average time to close issues: N/A
- Average time to close pull requests: N/A
- Total issue authors: 0
- Total pull request authors: 1
- Average comments per issue: 0
- Average comments per pull request: 0.0
- Merged pull requests: 0
- Bot issues: 0
- Bot pull requests: 2
Past Year
- Issues: 0
- Pull requests: 1
- Average time to close issues: N/A
- Average time to close pull requests: N/A
- Issue authors: 0
- Pull request authors: 1
- Average comments per issue: 0
- Average comments per pull request: 0.0
- Merged pull requests: 0
- Bot issues: 0
- Bot pull requests: 1
Top Authors
Issue Authors
Pull Request Authors
- dependabot[bot] (4)