https://github.com/cheind/py-motmetrics

:bar_chart: Benchmark multiple object trackers (MOT) in Python

Science Score: 49.0%

This score indicates how likely this project is to be science-related based on various indicators:

  • CITATION.cff file
  • codemeta.json file
    Found codemeta.json file
  • .zenodo.json file
    Found .zenodo.json file
  • DOI references
    Found 2 DOI reference(s) in README
  • Academic publication links
    Links to: arxiv.org, zenodo.org
  • Committers with academic emails
  • Institutional organization owner
  • JOSS paper metadata
  • Scientific vocabulary similarity
    Low similarity (11.7%) to scientific vocabulary

Keywords

benchmark clear-mot-metrics metrics mot mot-challenge object-detection object-tracking tracker

Keywords from Contributors

linear-assignment-problem spherical-harmonics opencv
Last synced: 5 months ago

Repository

:bar_chart: Benchmark multiple object trackers (MOT) in Python

Basic Info
  • Host: GitHub
  • Owner: cheind
  • License: MIT
  • Language: Python
  • Default Branch: develop
  • Homepage:
  • Size: 6.85 MB
Statistics
  • Stars: 1,435
  • Watchers: 22
  • Forks: 260
  • Open Issues: 63
  • Releases: 1
Topics
benchmark clear-mot-metrics metrics mot mot-challenge object-detection object-tracking tracker
Created almost 9 years ago · Last pushed about 1 year ago
Metadata Files
Readme License

Readme.md

py-motmetrics

The py-motmetrics library provides a Python implementation of metrics for benchmarking multiple object trackers (MOT).

While benchmarking single object trackers is rather straightforward, measuring the performance of multiple object trackers needs careful design as multiple correspondence constellations can arise (see image below). A variety of methods have been proposed in the past and while there is no general agreement on a single method, the methods of [1,2,3,4] have received considerable attention in recent years. py-motmetrics implements these metrics.

![](./motmetrics/etc/mot.png)
_Pictures courtesy of Bernardin, Keni, and Rainer Stiefelhagen [[1]](#References)_

In particular, py-motmetrics supports CLEAR-MOT [1,2] metrics and ID [4] metrics. Both attempt to find a minimum cost assignment between ground truth objects and predictions. However, while CLEAR-MOT solves the assignment problem on a local per-frame basis, ID-MEASURE solves a bipartite graph matching that minimizes the matching cost of objects and predictions over all frames. This blog-post by Ergys illustrates the differences in more detail.

Features at a glance

  • Variety of metrics
    Provides MOTA, MOTP, track quality measures, global ID measures and more. The results are comparable with the popular MOTChallenge benchmarks (*1).
  • Distance agnostic
    Supports Euclidean, Intersection over Union and other distance measures.
  • Complete event history
    Tracks all relevant per-frame events such as correspondences, misses, false alarms and switches.
  • Flexible solver backend
    Support for switching minimum assignment cost solvers. Supports scipy, ortools, munkres out of the box. Auto-tunes solver selection based on availability and problem size.
  • Easy to extend
    Events and summaries use pandas data structures for storage and analysis. New metrics can reuse values already computed by the metrics they depend on (see the sketch after this list).
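
For illustration, a new metric can be registered with the metrics host and declare existing metrics as dependencies. The sketch below is an assumption-laden example, not library code: motp_percent is a hypothetical metric, and the exact register signature should be checked against motmetrics/metrics.py.

```python
import motmetrics as mm

def motp_percent(df, motp):
    # Hypothetical custom metric: MOTP rescaled to the MOTChallenge percentage
    # convention. `motp` is injected because it is declared as a dependency below.
    return (1. - motp) * 100.

mh = mm.metrics.create()
# Signature assumed; see motmetrics/metrics.py for the authoritative API.
mh.register(motp_percent, deps=['motp'], formatter='{:.1f}'.format)

# Once an accumulator `acc` is populated (see Usage), the custom metric can be
# computed alongside the built-in ones:
# summary = mh.compute(acc, metrics=['motp', 'motp_percent'], name='acc')
```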

Metrics

py-motmetrics implements the following metrics. The metrics have been aligned with what is reported by MOTChallenge benchmarks.

```python
import motmetrics as mm

# List all default metrics
mh = mm.metrics.create()
print(mh.list_metrics_markdown())
```

| Name | Description |
| :--- | :--- |
| num_frames | Total number of frames. |
| num_matches | Total number of matches. |
| num_switches | Total number of track switches. |
| num_false_positives | Total number of false positives (false-alarms). |
| num_misses | Total number of misses. |
| num_detections | Total number of detected objects including matches and switches. |
| num_objects | Total number of unique object appearances over all frames. |
| num_predictions | Total number of unique prediction appearances over all frames. |
| num_unique_objects | Total number of unique object ids encountered. |
| mostly_tracked | Number of objects tracked for at least 80 percent of lifespan. |
| partially_tracked | Number of objects tracked between 20 and 80 percent of lifespan. |
| mostly_lost | Number of objects tracked less than 20 percent of lifespan. |
| num_fragmentations | Total number of switches from tracked to not tracked. |
| motp | Multiple object tracker precision. |
| mota | Multiple object tracker accuracy. |
| precision | Number of detected objects over sum of detected and false positives. |
| recall | Number of detections over number of objects. |
| idfp | ID measures: Number of false positive matches after global min-cost matching. |
| idfn | ID measures: Number of false negative matches after global min-cost matching. |
| idtp | ID measures: Number of true positive matches after global min-cost matching. |
| idp | ID measures: global min-cost precision. |
| idr | ID measures: global min-cost recall. |
| idf1 | ID measures: global min-cost F1 score. |
| obj_frequencies | pd.Series Total number of occurrences of individual objects over all frames. |
| pred_frequencies | pd.Series Total number of occurrences of individual predictions over all frames. |
| track_ratios | pd.Series Ratio of assigned to total appearance count per unique object id. |
| id_global_assignment | dict ID measures: Global min-cost assignment for ID measures. |
| deta_alpha | HOTA: Detection Accuracy (DetA) for a given threshold. |
| assa_alpha | HOTA: Association Accuracy (AssA) for a given threshold. |
| hota_alpha | HOTA: Higher Order Tracking Accuracy (HOTA) for a given threshold. |

MOTChallenge compatibility

py-motmetrics produces results compatible with popular MOTChallenge benchmarks (*1). Below are two results taken from the MOTChallenge Matlab devkit, corresponding to the results of the CEM tracker on the training set of the 2015 MOT 2DMark.

```
TUD-Campus
 IDF1  IDP  IDR| Rcll  Prcn  FAR|  GT  MT  PT  ML|  FP   FN  IDs  FM| MOTA  MOTP MOTAL
 55.8 73.0 45.1| 58.2  94.1 0.18|   8   1   6   1|  13  150    7   7| 52.6  72.3  54.3

TUD-Stadtmitte
 IDF1  IDP  IDR| Rcll  Prcn  FAR|  GT  MT  PT  ML|  FP   FN  IDs  FM| MOTA  MOTP MOTAL
 64.5 82.0 53.1| 60.9  94.0 0.25|  10   5   4   1|  45  452    7   6| 56.4  65.4  56.9
```

In comparison to py-motmetrics

| | IDF1 | IDP | IDR | Rcll | Prcn | GT | MT | PT | ML | FP | FN | IDs | FM | MOTA | MOTP |
| :--- | ---: | ---: | ---: | ---: | ---: | ---: | ---: | ---: | ---: | ---: | ---: | ---: | ---: | ---: | ---: |
| TUD-Campus | 55.8% | 73.0% | 45.1% | 58.2% | 94.1% | 8 | 1 | 6 | 1 | 13 | 150 | 7 | 7 | 52.6% | 0.277 |
| TUD-Stadtmitte | 64.5% | 82.0% | 53.1% | 60.9% | 94.0% | 10 | 5 | 4 | 1 | 45 | 452 | 7 | 6 | 56.4% | 0.346 |

(*1) Besides naming conventions, the only obvious differences are

  • Metric FAR is missing. It is given implicitly and can be recovered as FalsePos / Frames * 100.
  • Metric MOTP appears to be off. MOTChallenge benchmarks report MOTP as a percentage, while py-motmetrics sticks to the original definition of average distance over the number of assigned objects [1]; convert via (1 - MOTP) * 100 (see the sketch after this list).
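
Both conversions can be applied after the fact to a py-motmetrics summary. The snippet below is a minimal sketch; it assumes an already populated accumulator `acc` (see the Usage section) and uses the metric names from the table above.

```python
import motmetrics as mm

mh = mm.metrics.create()
summary = mh.compute(acc, metrics=['num_frames', 'num_false_positives', 'motp'], name='acc')

# FAR as reported by the MOTChallenge devkit: false alarms per 100 frames.
far = summary['num_false_positives'] / summary['num_frames'] * 100

# MOTP in the MOTChallenge percentage convention, from the distance-based MOTP.
motp_percent = (1. - summary['motp']) * 100
```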

You can compare tracker results to ground truth in MOTChallenge format by

python -m motmetrics.apps.eval_motchallenge --help

For MOT16/17, you can run

python -m motmetrics.apps.evaluateTracking --help

Installation

To install the latest development version of py-motmetrics (usually a bit more recent than the PyPI release below), run

pip install git+https://github.com/cheind/py-motmetrics.git

Install via PyPI

To install py-motmetrics use pip

pip install motmetrics

Python 3.5/3.6/3.9 and numpy, pandas and scipy are required. If no binary packages are available for your platform and building from source fails, you might want to try a distribution like Conda (see below) to install the dependencies.

Alternatively, for development, clone or fork this repository and install it in editable mode.

pip install -e <path/to/setup.py>

Install via Conda

In case you are using Conda, a simple way to run py-motmetrics is to create a virtual environment with all the necessary dependencies

```
conda env create -f environment.yml
activate motmetrics-env
```

Then activate / source the motmetrics-env and install py-motmetrics and run the tests.

```
activate motmetrics-env
pip install .
pytest
```

In case you already have an environment, you can install the dependencies from within it by

```
conda install --file requirements.txt
pip install .
pytest
```

Usage

Populating the accumulator

```python
import motmetrics as mm
import numpy as np

# Create an accumulator that will be updated during each frame
acc = mm.MOTAccumulator(auto_id=True)

# Call update once per frame. For now, assume distances between
# frame objects / hypotheses are given.
acc.update(
    [1, 2],                 # Ground truth objects in this frame
    [1, 2, 3],              # Detector hypotheses in this frame
    [
        [0.1, np.nan, 0.3], # Distances from object 1 to hypotheses 1, 2, 3
        [0.5, 0.2, 0.3]     # Distances from object 2 to hypotheses 1, 2, 3
    ]
)
```

The code above updates an event accumulator with data from a single frame. Here we assume that pairwise object / hypothesis distances have already been computed. Note the np.nan inside the distance matrix. It signals that object 1 cannot be paired with hypothesis 2. To inspect the current event history, simply print the events associated with the accumulator.

```python
print(acc.events) # a pandas DataFrame containing all events

"""
                Type  OId  HId    D
FrameId Event
0       0        RAW    1    1  0.1
        1        RAW    1    2  NaN
        2        RAW    1    3  0.3
        3        RAW    2    1  0.5
        4        RAW    2    2  0.2
        5        RAW    2    3  0.3
        6      MATCH    1    1  0.1
        7      MATCH    2    2  0.2
        8         FP  NaN    3  NaN
"""
```

The above data frame contains RAW and MOT events. To obtain just MOT events type

```python
print(acc.mot_events) # a pandas DataFrame containing MOT only events

"""
                Type  OId  HId    D
FrameId Event
0       6      MATCH    1    1  0.1
        7      MATCH    2    2  0.2
        8         FP  NaN    3  NaN
"""
```

Meaning object 1 was matched to hypothesis 1 with distance 0.1. Similarly, object 2 was matched to hypothesis 2 with distance 0.2. Hypothesis 3 could not be matched to any remaining object and generated a false positive (FP). Possible assignments are computed by minimizing the total assignment distance (Kuhn-Munkres algorithm).
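
The per-frame assignment above can be reproduced with a standalone Hungarian solve on the same distance matrix. This is a sketch for illustration only: it calls scipy directly rather than the library's solver backends, and masks NaN (forbidden) pairings with a large finite cost.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

d = np.array([[0.1, np.nan, 0.3],
              [0.5, 0.2,    0.3]])

# Replace forbidden pairings (NaN) with a large cost so the solver avoids them.
cost = np.where(np.isnan(d), 1e9, d)
rows, cols = linear_sum_assignment(cost)

# Keep only feasible pairings; 1-based ids to match the example above.
pairs = [(r + 1, c + 1) for r, c in zip(rows, cols) if np.isfinite(d[r, c])]
print(pairs)  # [(1, 1), (2, 2)]; hypothesis 3 stays unmatched and becomes an FP
```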

Continuing from above

```python
frameid = acc.update(
    [1, 2],
    [1],
    [
        [0.2],
        [0.4]
    ]
)
print(acc.mot_events.loc[frameid])

"""
        Type  OId  HId    D
Event
2      MATCH    1    1  0.2
3       MISS    2  NaN  NaN
"""
```

While object 1 was matched, object 2 couldn't be matched because no hypotheses are left to pair with.

```python
frameid = acc.update(
    [1, 2],
    [1, 3],
    [
        [0.6, 0.2],
        [0.1, 0.6]
    ]
)
print(acc.mot_events.loc[frameid])

"""
         Type  OId  HId    D
Event
4       MATCH    1    1  0.6
5      SWITCH    2    3  0.6
"""
```

Object 2 is now tracked by hypothesis 3, leading to a track switch. Note that although the pairing (1, 3) with cost 0.2 is possible (together with (2, 1) at cost 0.1, for a total of 0.3 instead of 1.2), the algorithm prefers to continue track assignments from past frames, which is a property of MOT metrics.

Computing metrics

Once the accumulator has been populated you can compute and display metrics. Continuing the example from above

```python
mh = mm.metrics.create()
summary = mh.compute(acc, metrics=['num_frames', 'mota', 'motp'], name='acc')
print(summary)

"""
     num_frames  mota  motp
acc           3   0.5  0.34
"""
```

Computing metrics for multiple accumulators or accumulator views is also possible

```python
summary = mh.compute_many(
    [acc, acc.events.loc[0:1]],
    metrics=['num_frames', 'mota', 'motp'],
    names=['full', 'part'])
print(summary)

"""
      num_frames  mota      motp
full           3   0.5  0.340000
part           2   0.5  0.166667
"""
```

Finally, you may want to reformat column names and how column values are displayed.

```python
strsummary = mm.io.render_summary(
    summary,
    formatters={'mota' : '{:.2%}'.format},
    namemap={'mota': 'MOTA', 'motp' : 'MOTP'}
)
print(strsummary)

"""
      num_frames   MOTA      MOTP
full           3  50.00%  0.340000
part           2  50.00%  0.166667
"""
```

For MOTChallenge, py-motmetrics provides predefined metric selectors, formatters and metric names, so that the result looks like what is provided via their Matlab devkit.

```python
summary = mh.compute_many(
    [acc, acc.events.loc[0:1]],
    metrics=mm.metrics.motchallenge_metrics,
    names=['full', 'part'])

strsummary = mm.io.render_summary(
    summary,
    formatters=mh.formatters,
    namemap=mm.io.motchallenge_metric_names
)
print(strsummary)

"""
      IDF1   IDP   IDR  Rcll  Prcn GT MT PT ML FP FN IDs  FM  MOTA  MOTP
full 83.3% 83.3% 83.3% 83.3% 83.3%  2  1  1  0  1  1   1   1 50.0% 0.340
part 75.0% 75.0% 75.0% 75.0% 75.0%  2  1  1  0  1  1   0   0 50.0% 0.167
"""
```

In order to generate an overall summary that computes the metrics jointly over all accumulators, add generate_overall=True as follows

```python
summary = mh.compute_many(
    [acc, acc.events.loc[0:1]],
    metrics=mm.metrics.motchallenge_metrics,
    names=['full', 'part'],
    generate_overall=True
)

strsummary = mm.io.render_summary(
    summary,
    formatters=mh.formatters,
    namemap=mm.io.motchallenge_metric_names
)
print(strsummary)

"""
         IDF1   IDP   IDR  Rcll  Prcn GT MT PT ML FP FN IDs  FM  MOTA  MOTP
full    83.3% 83.3% 83.3% 83.3% 83.3%  2  1  1  0  1  1   1   1 50.0% 0.340
part    75.0% 75.0% 75.0% 75.0% 75.0%  2  1  1  0  1  1   0   0 50.0% 0.167
OVERALL 80.0% 80.0% 80.0% 80.0% 80.0%  4  2  2  0  2  2   1   1 50.0% 0.275
"""
```

[Underdeveloped] Computing HOTA metrics

Computing HOTA metrics is also possible. However, they cannot be computed with the accumulator class directly, as HOTA requires computing a reweighting matrix from all frames at the beginning. Here is an example of how to use it:

```python
import os
import numpy as np
import motmetrics as mm

def compute_motchallenge(dir_name):
    # gt.txt and test.txt should be prepared in MOT15 format
    df_gt = mm.io.loadtxt(os.path.join(dir_name, "gt.txt"))
    df_test = mm.io.loadtxt(os.path.join(dir_name, "test.txt"))
    # Require different thresholds for matching
    th_list = np.arange(0.05, 0.99, 0.05)
    res_list = mm.utils.compare_to_groundtruth_reweighting(df_gt, df_test, "iou", distth=th_list)
    return res_list

# data_dir is the directory containing the gt.txt and test.txt files
acc = compute_motchallenge("data_dir")
mh = mm.metrics.create()

summary = mh.compute_many(
    acc,
    metrics=[
        "deta_alpha",
        "assa_alpha",
        "hota_alpha",
    ],
    generate_overall=True,  # Overall is the average we need only
)
strsummary = mm.io.render_summary(
    summary.iloc[[-1], :],  # Use list to preserve DataFrame type
    formatters=mh.formatters,
    namemap={"hota_alpha": "HOTA", "assa_alpha": "ASSA", "deta_alpha": "DETA"},
)
print(strsummary)
"""
# data_dir=motmetrics/data/TUD-Campus
          DETA  ASSA  HOTA
OVERALL  41.8% 36.9% 39.1%
# data_dir=motmetrics/data/TUD-Stadtmitte
          DETA  ASSA  HOTA
OVERALL  39.2% 40.9% 39.8%
"""
```

Computing distances

Up until this point we assumed the pairwise object/hypothesis distances to be known. Usually this is not the case. You are mostly given either rectangles or points (centroids) of related objects. To compute a distance matrix from them you can use the motmetrics.distances module as shown below.

Euclidean norm squared on points

```python
# Object related points
o = np.array([
    [1., 2],
    [2., 2],
    [3., 2],
])

# Hypothesis related points
h = np.array([
    [0., 0],
    [1., 1],
])

C = mm.distances.norm2squared_matrix(o, h, max_d2=5.)

"""
[[  5.   1.]
 [ nan   2.]
 [ nan   5.]]
"""
```

Intersection over union norm for 2D rectangles

```python
a = np.array([
    [0, 0, 1, 2],    # Format X, Y, Width, Height
    [0, 0, 0.8, 1.5],
])

b = np.array([
    [0, 0, 1, 2],
    [0, 0, 1, 1],
    [0.1, 0.2, 2, 2],
])
mm.distances.iou_matrix(a, b, max_iou=0.5)

"""
[[ 0.          0.5                nan]
 [ 0.4         0.42857143         nan]]
"""
```

Solver backends

For large datasets solving the minimum cost assignment becomes the dominant runtime part. py-motmetrics therefore supports these solvers out of the box

  • lapsolver - https://github.com/cheind/py-lapsolver
  • lapjv - https://github.com/gatagat/lap
  • scipy - https://github.com/scipy/scipy/tree/master/scipy
  • ortools<9.4 - https://github.com/google/or-tools
  • munkres - http://software.clapper.org/munkres/

A comparison for different sized matrices is shown below (taken from here)

Please note that the x-axis is scaled logarithmically. Missing bars indicate excessive runtime or errors in returned result.

By default py-motmetrics will try to find a LAP solver in the order of the list above. In order to temporarily replace the default solver use

```python
costs = ...
mysolver = lambda x: ...  # solver code that returns pairings

with lap.set_default_solver(mysolver):
    ...
```

For custom dataset

Use this section as a guide for calculating MOT metrics for your custom dataset.

Before you begin, make sure you have the ground truth and your tracker output data in the form of text files. The code below assumes MOT16 format for the ground truth as well as the tracker output. The data is arranged in the following sequence:

<frame number>, <object id>, <bb_left>, <bb_top>, <bb_width>, <bb_height>, <confidence>, <x>, <y>, <z>

A sample ground truth/tracker output file is shown below. If you are using a custom dataset, it is highly likely that you will have to create your own ground truth file. If you already have a MOT16-format ground truth file, you can use it directly; otherwise, you will need a MOT16 annotator tool to create the annotations (ground truth). You can use any tool to create your ground truth data, just make sure it follows the MOT16 format.

If you can't find a tool to create your ground truth files, you can use this free MOT16 annotator tool to create ground truth for your dataset which can then be used in conjunction with your tracker output to generate the MOT metrics.

```
1,1,763.00,272.00,189.00,38.00,1,-1,-1,-1
1,2,412.00,265.00,153.00,30.00,1,-1,-1,-1
2,1,762.00,269.00,185.00,41.00,1,-1,-1,-1
2,2,413.00,267.00,151.00,26.00,1,-1,-1,-1
3,1,760.00,272.00,186.00,38.00,1,-1,-1,-1
```

You can read more about MOT16 format here.

The following function loads the ground truth and tracker output files, processes them, and produces a set of metrics.

```python
def motMetricsEnhancedCalculator(gtSource, tSource):
  # import required packages
  import motmetrics as mm
  import numpy as np

  # load ground truth
  gt = np.loadtxt(gtSource, delimiter=',')

  # load tracking output
  t = np.loadtxt(tSource, delimiter=',')

  # Create an accumulator that will be updated during each frame
  acc = mm.MOTAccumulator(auto_id=True)

  # Max frame number maybe different for gt and t files
  for frame in range(int(gt[:,0].max())):
    frame += 1 # detection and frame numbers begin at 1

    # select id, x, y, width, height for current frame
    # required format for distance calculation is X, Y, Width, Height
    # We already have this format
    gt_dets = gt[gt[:,0]==frame,1:6] # select all detections in gt
    t_dets = t[t[:,0]==frame,1:6] # select all detections in t

    C = mm.distances.iou_matrix(gt_dets[:,1:], t_dets[:,1:], \
                                max_iou=0.5) # format: gt, t

    # Call update once per frame.
    # format: gt object ids, t object ids, distance
    acc.update(gt_dets[:,0].astype('int').tolist(), \
               t_dets[:,0].astype('int').tolist(), C)

  mh = mm.metrics.create()

  summary = mh.compute(acc, metrics=['num_frames', 'idf1', 'idp', 'idr', \
                                     'recall', 'precision', 'num_objects', \
                                     'mostly_tracked', 'partially_tracked', \
                                     'mostly_lost', 'num_false_positives', \
                                     'num_misses', 'num_switches', \
                                     'num_fragmentations', 'mota', 'motp' \
                                    ], \
                       name='acc')

  strsummary = mm.io.render_summary(
      summary,
      #formatters={'mota' : '{:.2%}'.format},
      namemap={'idf1': 'IDF1', 'idp': 'IDP', 'idr': 'IDR', 'recall': 'Rcll', \
               'precision': 'Prcn', 'num_objects': 'GT', \
               'mostly_tracked' : 'MT', 'partially_tracked': 'PT', \
               'mostly_lost' : 'ML', 'num_false_positives': 'FP', \
               'num_misses': 'FN', 'num_switches' : 'IDsw', \
               'num_fragmentations' : 'FM', 'mota': 'MOTA', 'motp' : 'MOTP', \
              }
  )
  print(strsummary)
```

Run the function by pointing to the ground truth and tracker output file. A sample output is shown below.

```python
# Calculate the MOT metrics
motMetricsEnhancedCalculator('gt/groundtruth.txt', \
                             'to/trackeroutput.txt')
"""
     num_frames  IDF1       IDP       IDR      Rcll      Prcn   GT  MT  PT  ML  FP  FN  IDsw  FM      MOTA      MOTP
acc         150  0.75  0.857143  0.666667  0.743295  0.955665  261   0   2   0   9  67     1  12  0.704981  0.244387
"""
```

Running tests

py-motmetrics uses the pytest framework. To run the tests, simply cd into the source directory and run pytest.

References

  1. Bernardin, Keni, and Rainer Stiefelhagen. "Evaluating multiple object tracking performance: the CLEAR MOT metrics." EURASIP Journal on Image and Video Processing 2008.1 (2008): 1-10.
  2. Milan, Anton, et al. "Mot16: A benchmark for multi-object tracking." arXiv preprint arXiv:1603.00831 (2016).
  3. Li, Yuan, Chang Huang, and Ram Nevatia. "Learning to associate: Hybridboosted multi-target tracker for crowded scene." Computer Vision and Pattern Recognition, 2009. CVPR 2009. IEEE Conference on. IEEE, 2009.
  4. Performance Measures and a Data Set for Multi-Target, Multi-Camera Tracking. E. Ristani, F. Solera, R. S. Zou, R. Cucchiara and C. Tomasi. ECCV 2016 Workshop on Benchmarking Multi-Target Tracking.

Docker

Update ground truth and test data:

/data/train directory should contain MOT 2D 2015 Ground Truth files. /data/test directory should contain your results.

You can check usage and directory listing at https://github.com/cheind/py-motmetrics/blob/master/motmetrics/apps/eval_motchallenge.py

Build Image

docker build -t desired-image-name -f Dockerfile .

Run Image

docker run desired-image-name

(credits to christosavg)

License

```
MIT License

Copyright (c) 2017-2022 Christoph Heindl
Copyright (c) 2018 Toka
Copyright (c) 2019-2022 Jack Valmadre

Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
```

Owner

  • Name: Christoph Heindl
  • Login: cheind
  • Kind: user
  • Location: Austrian area

I am a computer scientist working at the interface of perception, robotics and deep learning.

GitHub Events

Total
  • Issues event: 8
  • Release event: 1
  • Watch event: 74
  • Issue comment event: 7
  • Push event: 2
  • Pull request event: 2
  • Fork event: 6
Last Year
  • Issues event: 8
  • Release event: 1
  • Watch event: 74
  • Issue comment event: 7
  • Push event: 2
  • Pull request event: 2
  • Fork event: 6

Committers

Last synced: over 1 year ago

All Time
  • Total Commits: 334
  • Total Committers: 23
  • Avg Commits per committer: 14.522
  • Development Distribution Score (DDS): 0.398
Past Year
  • Commits: 2
  • Committers: 2
  • Avg Commits per committer: 1.0
  • Development Distribution Score (DDS): 0.5
Top Committers
Name Email Commits
Christoph Heindl c****l@g****m 201
Jack Valmadre v****e@g****m 85
Heindl Christoph c****d@p****t 20
Jack Valmadre j****r 8
Jiri Borovec j****c@s****z 2
Emily c****s 1
Lihi Gur-Arie, PhD 6****e 1
Michael Hoss 3****s 1
Urwa Muaz 4****a 1
angelcarro 6****o 1
whizmo 6****o 1
Alexander Litzenberger a****x@a****i 1
Alexander Litzenberger c****9@g****m 1
Ardeshir Shojaeinasab a****r@g****m 1
Christos Avgerinos c****n@g****m 1
Hanzhi Zhou h****3@1****m 1
Helicopt f****8@1****m 1
Håkan Ardö h****n@d****g 1
Justin Ruan j****9@g****m 1
Khalid Waleed k****d@h****m 1
Martha m****9@g****m 1
Matěj Šmíd m@m****z 1
shensheng27 4****5@q****m 1

Issues and Pull Requests

Last synced: 6 months ago

All Time
  • Total issues: 102
  • Total pull requests: 27
  • Average time to close issues: 4 months
  • Average time to close pull requests: 4 months
  • Total issue authors: 90
  • Total pull request authors: 22
  • Average comments per issue: 3.41
  • Average comments per pull request: 4.15
  • Merged pull requests: 14
  • Bot issues: 0
  • Bot pull requests: 0
Past Year
  • Issues: 7
  • Pull requests: 2
  • Average time to close issues: 11 days
  • Average time to close pull requests: about 17 hours
  • Issue authors: 7
  • Pull request authors: 1
  • Average comments per issue: 0.29
  • Average comments per pull request: 0.5
  • Merged pull requests: 1
  • Bot issues: 0
  • Bot pull requests: 0
Top Authors
Issue Authors
  • mikel-brostrom (5)
  • cheind (3)
  • xifen523 (2)
  • ssbilakeri (2)
  • andreamaral99 (2)
  • mcjqwer (2)
  • takehiro-code (2)
  • jvlmdr (2)
  • aia39 (1)
  • bochinski (1)
  • gustavovaliati (1)
  • fcakyon (1)
  • Instantnoodles-madman (1)
  • MacBlub (1)
  • amm272 (1)
Pull Request Authors
  • michael-hoss (2)
  • mikel-brostrom (2)
  • angelcarro (2)
  • alexlitz (2)
  • Sentient07 (2)
  • Justin900429 (2)
  • toshi-k (2)
  • cinabars (2)
  • AmetistDrake (2)
  • sirvinedev (2)
  • Lihi-Gur-Arie (1)
  • Rusteam (1)
  • bmetge (1)
  • Itto1992 (1)
  • muaz-urwa (1)
Top Labels
Issue Labels
enhancement (3) help wanted (1) question (1)
Pull Request Labels

Packages

  • Total packages: 4
  • Total downloads:
    • pypi 194,514 last-month
  • Total docker downloads: 1,340
  • Total dependent packages: 13
    (may contain duplicates)
  • Total dependent repositories: 398
    (may contain duplicates)
  • Total versions: 19
  • Total maintainers: 2
pypi.org: motmetrics

Metrics for multiple object tracker benchmarking.

  • Versions: 9
  • Dependent Packages: 13
  • Dependent Repositories: 398
  • Downloads: 194,514 Last month
  • Docker Downloads: 1,340
Rankings
Dependent repos count: 0.7%
Dependent packages count: 1.3%
Downloads: 1.3%
Average: 1.8%
Stargazers count: 1.9%
Docker downloads count: 2.3%
Forks count: 3.3%
Maintainers (1)
Last synced: 5 months ago
proxy.golang.org: github.com/cheind/py-motmetrics
  • Versions: 8
  • Dependent Packages: 0
  • Dependent Repositories: 0
Rankings
Dependent packages count: 6.5%
Average: 6.7%
Dependent repos count: 7.0%
Last synced: 6 months ago
spack.io: py-motmetrics

The py-motmetrics library provides a Python implementation of metrics for benchmarking multiple object trackers (MOT).

  • Versions: 1
  • Dependent Packages: 0
  • Dependent Repositories: 0
Rankings
Dependent repos count: 0.0%
Forks count: 6.3%
Stargazers count: 6.8%
Average: 17.6%
Dependent packages count: 57.3%
Maintainers (1)
Last synced: 6 months ago
conda-forge.org: motmetrics
  • Versions: 1
  • Dependent Packages: 0
  • Dependent Repositories: 0
Rankings
Forks count: 10.5%
Stargazers count: 10.8%
Average: 26.6%
Dependent repos count: 34.0%
Dependent packages count: 51.2%
Last synced: 6 months ago

Dependencies

requirements.txt pypi
  • enum34 *
  • numpy >=1.12.1
  • pandas >=0.23.1
  • scipy >=0.19.0
  • xmltodict >=0.12.0
requirements_dev.txt pypi
  • flake8 *
  • flake8-import-order *
  • pytest *
  • pytest-benchmark *
.github/workflows/python-package.yml actions
  • actions/checkout v3 composite
  • actions/setup-python v3 composite
environment.yml conda
  • numpy
  • pandas
  • pip
  • python 3.6.*
  • scipy
Dockerfile docker
  • ubuntu latest build