https://github.com/google-research/localmot
Science Score: 10.0%
This score indicates how likely this project is to be science-related, based on the following indicators:
- ○ CITATION.cff file
- ○ codemeta.json file
- ○ .zenodo.json file
- ○ DOI references
- ✓ Academic publication links (links to: arxiv.org)
- ○ Committers with academic emails
- ○ Institutional organization owner
- ○ JOSS paper metadata
- ○ Scientific vocabulary similarity (low similarity, 10.6%, to scientific vocabulary)
Repository
Basic Info
- Host: GitHub
- Owner: google-research
- License: apache-2.0
- Language: Python
- Default Branch: main
- Size: 291 KB
Statistics
- Stars: 22
- Watchers: 6
- Forks: 3
- Open Issues: 1
- Releases: 0
Metadata Files
README.md
This is not an officially supported Google product.
Local Metrics for Multi-Object Tracking
Jack Valmadre, Alex Bewley, Jonathan Huang, Chen Sun, Cristian Sminchisescu, Cordelia Schmid
Google Research
Paper: https://arxiv.org/abs/2104.02631

Introduction
Metrics for Multi-Object Tracking (MOT) can be divided into strict metrics, which enforce a fixed, one-to-one correspondence between ground-truth and predicted tracks, and non-strict metrics, which award some credit for tracks that are correct in a subset of frames. IDF1 and ATA are examples of strict metrics, whereas MOTA and HOTA are examples of non-strict metrics. The type of metric which is appropriate is determined by the priorities of the application. While strict metrics are relatively uncontroversial, the design of a non-strict metric usually involves two disputable decisions: (i) how to quantify association error and (ii) how to combine detection and association metrics.
Local metrics are obtained by applying an existing strict metric locally in a sliding window. Local metrics represent an alternative way to define a non-strict metric, where the degree of strictness (that is, the balance between detection and association) is controlled via the temporal horizon of the local window. This replaces the two open questions above with the need to choose a temporal horizon. Moreover, it provides a single family of metrics that can be used for both strict and non-strict evaluation. Varying the horizon parameter enables analysis of association error with respect to temporal distance.
One historical weakness of metrics based on one-to-one track correspondence is their lack of transparency with respect to error type. That is, it can be unclear whether a reduction in overall tracking error is due to improved detection or association (or both). To address this, we develop a decomposition of overall tracking error into four components: over- and under-detection (FN det, FP det) and over- and under-association (merge, split). The error decomposition is equally applicable to local metrics.
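The one-to-one correspondence that a strict metric enforces can be illustrated with a small sketch. This is not the localmot implementation: the `strict_idf1` helper and the overlap counts are made up for illustration, using Hungarian matching from scipy (which the library already lists as a dependency).

```python
import numpy as np
from scipy.optimize import linear_sum_assignment


def strict_idf1(overlap, num_gt_frames, num_pr_frames):
  """IDF1-style score under the best one-to-one track correspondence."""
  # Hungarian matching maximizes total matched frames (negate for min-cost).
  rows, cols = linear_sum_assignment(-overlap)
  matched = overlap[rows, cols].sum()
  # IDF1 = 2*IDTP / (2*IDTP + IDFP + IDFN) = 2*matched / (gt + pr frames).
  return 2 * matched / (num_gt_frames + num_pr_frames)


# Toy example: overlap[i, j] counts the frames in which ground-truth
# track i and predicted track j coincide. Ground-truth tracks span 20
# frames in total, predicted tracks 24.
overlap = np.array([[8, 2, 0],
                    [1, 9, 0]])
score = strict_idf1(overlap, num_gt_frames=20, num_pr_frames=24)
print(round(score, 3))  # 2*(8+9)/44 = 0.773
```

A local metric would apply the same matching repeatedly, restricted to a sliding temporal window, so that identity switches far apart in time are penalized less than under the global one-to-one constraint.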
Demo
We provide a script to evaluate predictions in MOT Challenge format.
First set up the code and dependencies:
```bash
git clone https://github.com/google-research/localmot
pip install -r localmot/requirements.txt
export PYTHONPATH="$(pwd)/localmot:$PYTHONPATH"
```
Next download the ground-truth data:
```bash
mkdir -p data/gt
( cd data/gt && \
  curl -L -O -C - https://motchallenge.net/data/MOT17Labels.zip && \
  unzip -o -d MOT17 MOT17Labels.zip )
```
and the predictions of several trackers:
```bash
mkdir -p data/pr/MOT17
( cd data/pr/MOT17 && \
  curl -L -o SORT17.zip -C - "https://motchallenge.net/download_results.php?shakey=d80cceb92629e8236c60129417830bd2fdec8025&name=SORT17&chl=10" && \
  curl -L -o Tracktor++v2.zip -C - "https://motchallenge.net/download_results.php?shakey=b555a1f532c3d161f836fadc8d283fa2100f05c8&name=Tracktor++v2&chl=10" && \
  curl -L -o MAT.zip -C - "https://motchallenge.net/download_results.php?shakey=3af4ae73bef6ece5564ef10931cf49773631b7eb&name=MAT&chl=10" && \
  curl -L -o Fair.zip -C - "https://www.motchallenge.net/download_results.php?shakey=4a2cf604010e9994a49883db083d44ad63e8765a&name=Fair&chl=10" && \
  curl -L -o Lif_T.zip -C - "https://motchallenge.net/download_results.php?shakey=aaffb4819ec6c7ebac66fd401953bf7b87e56936&name=Lif_T&chl=10" && \
  curl -L -o LSST17.zip -C - "https://motchallenge.net/download_results.php?shakey=dd2c3c1819dd450538a5a57ee3f696f5ea756f7b&name=LSST17&chl=10" && \
  unzip -o -d SORT17 SORT17.zip && \
  unzip -o -d Tracktor++v2 Tracktor++v2.zip && \
  unzip -o -d MAT MAT.zip && \
  unzip -o -d Fair Fair.zip && \
  unzip -o -d Lif_T Lif_T.zip && \
  unzip -o -d LSST17 LSST17.zip )
```
To run the evaluation:
```bash
python -m localmot.scripts.eval_motchallenge \
  --gt_dir=data/gt/MOT17/train \
  --pr_dir=data/pr/MOT17 \
  --out_dir=out/MOT17/train \
  --challenge=MOT17 \
  --units=frames \
  --nodiagnostics \
  --plot
```
This will generate several tables and figures in `out_dir`, including a plot of ALTA versus horizon (in frames).
Note that these results differ from those in the paper because they are computed on the training set.

Note that the demo script caches the results for each sequence.
Use --ignore_cache to ignore and overwrite the cached results.
Separate caches are used for each temporal unit and with/without diagnostics.
Library usage
```python
import localmot
import numpy as np
import pandas as pd
```
Example using fixed horizons (in frames):
```python
horizons = [0, 10, 100, np.inf]

stats = {}
for key, seq in sequences:
  stats[key] = localmot.metrics.local_stats(
      seq.num_frames, seq.gt_id_subset, seq.pr_id_subset, seq.similarity,
      horizons=horizons)
total_stats = sum(stats.values())
metrics = localmot.metrics.normalize(total_stats)
```
Example using fixed horizons (in seconds):
```python
horizons = [0, 1, 3, 10, np.inf]
units = 'seconds'

stats = {}
for key, seq in sequences:
  time_scale = localmot.horizon_util.units_to_time_scale(
      units, seq.num_frames, seq.fps)
  stats[key] = localmot.metrics.local_stats(
      seq.num_frames, seq.gt_id_subset, seq.pr_id_subset, seq.similarity,
      horizons=horizons, time_scale=time_scale)
total_stats = sum(stats.values())
metrics = localmot.metrics.normalize(total_stats)
```
Example using the spanning range of horizons, where all sequences have the same length and fps:
```python
num_frames = 1000
fps = 10
units = 'seconds'

time_scale = localmot.horizon_util.units_to_time_scale(units, num_frames, fps)
horizons = localmot.horizon_util.horizon_range(num_frames, time_scale)

stats = {}
for key, seq in sequences:
  stats[key] = localmot.metrics.local_stats(
      num_frames, seq.gt_id_subset, seq.pr_id_subset, seq.similarity,
      horizons=horizons, time_scale=time_scale)
total_stats = sum(stats.values())
metrics = localmot.metrics.normalize(total_stats)
```
Example using the spanning range of horizons with heterogeneous sequences:
```python
units = 'seconds'

stats = {}
min_horizon = np.inf
max_horizon = 0
for key, seq in sequences:
  time_scale = localmot.horizon_util.units_to_time_scale(
      units, seq.num_frames, seq.fps)
  horizons = localmot.horizon_util.horizon_range(seq.num_frames, time_scale)
  frame_horizons = localmot.horizon_util.int_frame_horizon(
      horizons, seq.num_frames, time_scale)
  stats[key] = localmot.metrics.local_stats_at_int_horizons(
      seq.num_frames, seq.gt_id_subset, seq.pr_id_subset, seq.similarity,
      horizons=frame_horizons)
  min_horizon = min(min_horizon, min(horizons))
  max_horizon = max(max_horizon, max(horizons))

horizons = localmot.horizon_util.nice_logspace(min_horizon, max_horizon)

extended_stats = {}
for key, seq in sequences:
  time_scale = localmot.horizon_util.units_to_time_scale(
      units, seq.num_frames, seq.fps)
  frame_horizons = localmot.horizon_util.int_frame_horizon(
      horizons, seq.num_frames, time_scale)
  extended_stats[key] = (stats[key].loc[frame_horizons]
                         .set_index(pd.Index(horizons, name='horizon')))

total_stats = sum(extended_stats.values())
metrics = localmot.metrics.normalize(total_stats)
```
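Once metrics have been computed at several horizons, a curve like the demo script's ALTA-versus-horizon plot can also be drawn by hand. This is only a minimal sketch: it assumes the normalized metrics form a pandas DataFrame indexed by horizon, and the column name `'alta'` and the values below are made up for illustration.

```python
import numpy as np
import pandas as pd
import matplotlib
matplotlib.use('Agg')  # headless backend so no display is required
import matplotlib.pyplot as plt

# Hypothetical metrics table indexed by horizon. In practice, use the
# DataFrame returned by localmot.metrics.normalize in the examples above;
# the 'alta' column name and these values are assumptions.
metrics = pd.DataFrame({'alta': [0.92, 0.81, 0.70, 0.64]},
                       index=pd.Index([1, 10, 100, 1000], name='horizon'))

fig, ax = plt.subplots()
ax.plot(metrics.index, metrics['alta'], marker='o')
ax.set_xscale('log')  # horizons span orders of magnitude
ax.set_xlabel('horizon (frames)')
ax.set_ylabel('ALTA')
fig.savefig('alta_vs_horizon.png')
```

As the horizon grows, the metric becomes stricter, so such curves are typically non-increasing.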
Citation
If you use these local metrics or code for your publication, please cite the following paper:
```
@article{valmadre2021local,
  title={Local Metrics for Multi-Object Tracking},
  author={Valmadre, Jack and Bewley, Alex and Huang, Jonathan and Sun, Chen and Sminchisescu, Cristian and Schmid, Cordelia},
  journal={arXiv preprint arXiv:2104.02631},
  year={2021}
}
```
Contact
For bugs or questions about the code, feel free to open an issue on GitHub. For further queries, send an email to localmot-dev@google.com.
Owner
- Name: Google Research
- Login: google-research
- Kind: organization
- Location: Earth
- Website: https://research.google
- Repositories: 226
- Profile: https://github.com/google-research
GitHub Events
Total
- Watch event: 1
Last Year
- Watch event: 1
Committers
Last synced: 9 months ago
Top Committers
| Name | Email | Commits |
|---|---|---|
| Jack Valmadre | v****e@g****m | 8 |
| Jack Valmadre | j****r | 3 |
Committer Domains (Top 20 + Academic)
Issues and Pull Requests
Last synced: 9 months ago
All Time
- Total issues: 0
- Total pull requests: 1
- Average time to close issues: N/A
- Average time to close pull requests: N/A
- Total issue authors: 0
- Total pull request authors: 1
- Average comments per issue: 0
- Average comments per pull request: 0.0
- Merged pull requests: 0
- Bot issues: 0
- Bot pull requests: 0
Past Year
- Issues: 0
- Pull requests: 0
- Average time to close issues: N/A
- Average time to close pull requests: N/A
- Issue authors: 0
- Pull request authors: 0
- Average comments per issue: 0
- Average comments per pull request: 0
- Merged pull requests: 0
- Bot issues: 0
- Bot pull requests: 0
Top Authors
Issue Authors
Pull Request Authors
- jvlmdr (1)
Top Labels
Issue Labels
Pull Request Labels
Dependencies
- absl-py >=0.13.0
- matplotlib >=3.4.2
- numpy >=1.21.0
- pandas >=1.2.5
- scipy >=1.7.0