mmtracking-flow-guided-feature-aggregation
A revised implementation of FGFA that supports batch size >1 and fixes several bugs!
https://github.com/direct20/mmtracking-flow-guided-feature-aggregation
Science Score: 54.0%
This score indicates how likely this project is to be science-related, based on these indicators:
- ✓ CITATION.cff file: found
- ✓ codemeta.json file: found
- ✓ .zenodo.json file: found
- ○ DOI references: none found
- ✓ Academic publication links: links to arxiv.org
- ○ Academic email domains: none found
- ○ Institutional organization owner: none found
- ○ JOSS paper metadata: none found
- ○ Scientific vocabulary similarity: low similarity (8.5%) to scientific vocabulary
Repository
A revised implementation of FGFA that supports batch size >1 and fixes several bugs!
Basic Info
- Host: GitHub
- Owner: Direct20
- License: apache-2.0
- Language: Python
- Default Branch: master
- Size: 359 KB
Statistics
- Stars: 0
- Watchers: 0
- Forks: 0
- Open Issues: 0
- Releases: 0
Metadata Files
README.md
Flow-Guided Feature Aggregation for Video Object Detection
This repo provides an up-to-date implementation of FGFA with stable accuracy and support for batch size >1.
Introduction
The FGFA paper proposes a flow-guided feature aggregation method for video object detection, and it has been a vital baseline for subsequent research. However, the code released with the paper is based on MXNet, which is seldom updated anymore.
The MEGA repo provides an implementation based on torch 1.3 and the CUDA 9/10 series, which is now outdated.
MMTracking provides a nice, up-to-date implementation, but we ran into the problems below:
1. When we trained FGFA on the ImageNet VID dataset and our custom datasets, the accuracy was oddly low. We solved this by replacing MMTracking's flow net with the MEGA repo's flow net, after which the accuracy was correct.
2. The batch size is forcibly constrained to 1, which can lead to unstable training and wasted resources on large-memory GPUs; it also slows down training. We solved this problem by completely re-implementing FGFA and removing some constraints in the MMTracking framework.
This repo now provides an up-to-date implementation of FGFA with stable accuracy and support for batch size >1. It is based on MMTracking 0.14.0. We tested it with torch 1.10 and CUDA 11.3, and it works well; other versions may also work.
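The batch-size change above can be sketched as an MMTracking-style config override. This is a hedged sketch, not the repo's actual config: the base config filename is hypothetical, and the exact values are illustrative. In MMTracking 0.x, per-GPU batch size is controlled by `data.samples_per_gpu`.

```python
# Hypothetical override config (sketch only; the base file name is an assumption,
# not necessarily what ships in configs/vid/fgfa_bm).
_base_ = ['./fgfa_faster_rcnn_r50_dc5_1x_imagenetvid.py']

data = dict(
    samples_per_gpu=4,   # batch size > 1; upstream MMTracking forces this to 1 for FGFA
    workers_per_gpu=2,   # dataloader workers per GPU
)
```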
Installation, Train, Inference, etc.
The revised FGFA config files are placed in configs/vid/fgfa_bm. Pretrained flow net weights are available under Releases; download them and put them in the pretrain/ folder. To use this code, you should be familiar with OpenMMLab's MMTracking.
For other details, please refer to README_mmtracking.md and the MMTracking repo.
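The setup steps above might look like the following usage sketch. The release asset name and config filename are assumptions (check the Releases page and configs/vid/fgfa_bm for the real names); `tools/train.py` is MMTracking's standard training entry point.

```shell
# Put the pretrained flow net weights where the configs expect them
# (download the asset from this repo's Releases page first; filename is assumed).
mkdir -p pretrain
mv ~/Downloads/flownet.pth pretrain/

# Launch training with MMTracking's standard entry point; replace the
# config name with an actual file from configs/vid/fgfa_bm.
python tools/train.py configs/vid/fgfa_bm/<your_fgfa_config>.py
```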
Owner
- Login: Direct20
- Kind: user
- Repositories: 1
- Profile: https://github.com/Direct20
Citation (CITATION.cff)
cff-version: 1.2.0
message: "If you use this software, please cite it as below."
authors:
  - name: "MMTracking Contributors"
title: "OpenMMLab Video Perception Toolbox and Benchmark"
date-released: 2021-01-04
url: "https://github.com/open-mmlab/mmtracking"
license: Apache-2.0
GitHub Events
Total
- Watch event: 1
- Push event: 5
Last Year
- Watch event: 1
- Push event: 5
Dependencies
- pytorch/pytorch ${PYTORCH}-cuda${CUDA}-cudnn${CUDNN}-devel build
- cython *
- numpy *
- myst_parser *
- sphinx ==4.0.2
- sphinx-copybutton *
- sphinx_markdown_tables *
- mmcls >=0.16.0,<1.0.0
- mmcv-full >=1.6.1,<1.7.0
- mmdet >=2.19.1,<3.0.0
- mmcls *
- mmcv *
- mmdet *
- torch *
- torchvision *
- attributee *
- dotty_dict *
- einops *
- lap *
- matplotlib *
- motmetrics *
- packaging *
- pandas <=1.3.5
- pycocotools *
- scipy <=1.7.3
- seaborn *
- terminaltables *
- tqdm *
- asynctest * test
- codecov * test
- flake8 * test
- interrogate * test
- isort ==4.3.21 test
- kwarray * test
- pytest * test
- ubelt * test
- xdoctest >=0.10.0 test
- yapf * test