ppal

[CVPR 2024] Plug and Play Active Learning for Object Detection

https://github.com/chenhongyiyang/ppal

Science Score: 54.0%

This score indicates how likely this project is to be science-related based on various indicators:

  • CITATION.cff file
    Found CITATION.cff file
  • codemeta.json file
    Found codemeta.json file
  • .zenodo.json file
    Found .zenodo.json file
  • DOI references
  • Academic publication links
    Links to: arxiv.org
  • Academic email domains
  • Institutional organization owner
  • JOSS paper metadata
  • Scientific vocabulary similarity
    Low similarity (10.6%) to scientific vocabulary

Keywords

active-learning object-detection
Last synced: 6 months ago

Repository

[CVPR 2024] Plug and Play Active Learning for Object Detection

Basic Info
  • Host: GitHub
  • Owner: ChenhongyiYang
  • License: apache-2.0
  • Language: Python
  • Default Branch: main
  • Homepage:
  • Size: 2.62 MB
Statistics
  • Stars: 97
  • Watchers: 5
  • Forks: 13
  • Open Issues: 17
  • Releases: 0
Topics
active-learning object-detection
Created over 3 years ago · Last pushed almost 2 years ago
Metadata Files
Readme License Citation

README.md

Plug and Play Active Learning for Object Detection

PyTorch implementation of our paper: Plug and Play Active Learning for Object Detection

Requirements

  • Our codebase is built on top of MMDetection, which can be installed following the official instructions.
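A typical MMDetection setup installs PyTorch first and then MMCV. The sketch below matches the pin in this repo's requirements/mminstall.txt (mmcv-full >= 1.3.17); it is illustrative only, so follow the official MMDetection installation guide for the exact versions your CUDA/PyTorch combination needs:

```shell
# Illustrative sketch -- consult the official MMDetection install guide.
# mmcv-full >= 1.3.17 matches this repo's requirements/mminstall.txt.
pip install -U openmim
mim install "mmcv-full>=1.3.17"
```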

Usage

Installation

```shell
python setup.py install
```

Setup dataset

  • Place your dataset in the following structure (only vital files are shown; this is the default MMDetection data placement):

```
PPAL
`-- data
    |-- coco
    |   |-- train2017
    |   |-- val2017
    |   `-- annotations
    |       |-- instances_train2017.json
    |       `-- instances_val2017.json
    `-- VOCdevkit
        |-- VOC2007
        |   |-- ImageSets
        |   |-- JPEGImages
        |   `-- Annotations
        `-- VOC2012
            |-- ImageSets
            |-- JPEGImages
            `-- Annotations
```
  • For convenience, we use COCO style annotation for Pascal VOC active learning. Please download trainval_0712.json.
  • Set up the active learning datasets:

```shell
zsh tools/al_data/data_setup.sh /path/to/trainval_0712.json
```
  • The above command will set up a new Pascal VOC data folder. It will also generate three different active learning initial annotations for both datasets: the COCO initial sets contain 2% of the original annotated images, and the Pascal VOC initial sets contain 5%.
  • The resulting file structure is as follows:

```
PPAL
`-- data
    |-- coco
    |   |-- train2017
    |   |-- val2017
    |   `-- annotations
    |       |-- instances_train2017.json
    |       `-- instances_val2017.json
    |-- VOCdevkit
    |   |-- VOC2007
    |   |   |-- ImageSets
    |   |   |-- JPEGImages
    |   |   `-- Annotations
    |   `-- VOC2012
    |       |-- ImageSets
    |       |-- JPEGImages
    |       `-- Annotations
    |-- VOC0712
    |   |-- images
    |   `-- annotations
    |       `-- trainval_0712.json
    `-- active_learning
        |-- coco
        |   |-- coco_2365_labeled_1.json
        |   |-- coco_2365_unlabeled_1.json
        |   |-- coco_2365_labeled_2.json
        |   |-- coco_2365_unlabeled_2.json
        |   |-- coco_2365_labeled_3.json
        |   `-- coco_2365_unlabeled_3.json
        `-- voc
            |-- voc_827_labeled_1.json
            |-- voc_827_unlabeled_1.json
            |-- voc_827_labeled_2.json
            |-- voc_827_unlabeled_2.json
            |-- voc_827_labeled_3.json
            `-- voc_827_unlabeled_3.json
```
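The numbers in the generated file names encode the initial labeled-set sizes: 2,365 is 2% of COCO train2017's 118,287 images, and 827 is 5% of the 16,551 VOC07+12 trainval images (standard dataset sizes, not read from this repo). The sketch below checks that arithmetic and mimics the idea of a random labeled/unlabeled split over a COCO-style annotation dict; the `initial_split` helper is hypothetical, not the repo's actual setup script.

```python
import random

# Standard training-set sizes (well-known figures, assumed here):
# COCO train2017 has 118,287 images; VOC07+12 trainval has 16,551.
COCO_TRAIN, VOC_TRAINVAL = 118_287, 16_551

# The initial-set sizes encoded in the generated file names.
assert int(COCO_TRAIN * 0.02) == 2365   # coco_2365_*.json -> 2% of COCO
assert int(VOC_TRAINVAL * 0.05) == 827  # voc_827_*.json   -> 5% of VOC

def initial_split(coco_dict, fraction, seed=0):
    """Illustrative labeled/unlabeled split of a COCO-style annotation dict.

    Mimics the idea behind the repo's data setup; NOT its actual
    implementation. Annotations follow their image into each side.
    """
    images = list(coco_dict["images"])
    random.Random(seed).shuffle(images)
    n = int(len(images) * fraction)
    labeled_ids = {im["id"] for im in images[:n]}

    def subset(keep):
        return {
            "images": [im for im in coco_dict["images"] if keep(im["id"])],
            "annotations": [a for a in coco_dict["annotations"]
                            if keep(a["image_id"])],
            "categories": coco_dict["categories"],
        }

    return (subset(lambda i: i in labeled_ids),
            subset(lambda i: i not in labeled_ids))

# Toy COCO-style dict with 100 images, one box each.
toy = {
    "images": [{"id": i, "file_name": f"{i}.jpg"} for i in range(100)],
    "annotations": [{"id": i, "image_id": i, "category_id": 1} for i in range(100)],
    "categories": [{"id": 1, "name": "object"}],
}
labeled, unlabeled = initial_split(toy, 0.02)
print(len(labeled["images"]), len(unlabeled["images"]))  # 2 98
```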
  • Please refer to data_setup.sh and createaldataset.py to generate your own active learning annotations.

Run active learning
  • You can run active learning using a single command with a config file. For example, you can run the COCO and Pascal VOC RetinaNet experiments with:

```shell
python tools/run_al_coco.py --config al_configs/coco/ppal_retinanet_coco.py --model retinanet
python tools/run_al_voc.py --config al_configs/voc/ppal_retinanet_voc.py --model retinanet
```
  • Please check the config file to set up the data paths and environment settings before running the experiments.

Citation

```
@InProceedings{yang2024ppal,
  author    = {{Yang, Chenhongyi and Huang, Lichao and Crowley, Elliot J.}},
  title     = {{Plug and Play Active Learning for Object Detection}},
  booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
  year      = {2024}
}
```

Owner

  • Name: Chenhongyi Yang
  • Login: ChenhongyiYang
  • Kind: user
  • Location: Zurich, Switzerland
  • Company: Meta

Research Scientist at Meta Reality Labs

Citation (CITATION.cff)

cff-version: 1.2.0
message: "If you use this software, please cite it as below."
authors:
  - name: "MMDetection Contributors"
title: "OpenMMLab Detection Toolbox and Benchmark"
date-released: 2018-08-22
url: "https://github.com/open-mmlab/mmdetection"
license: Apache-2.0

GitHub Events

Total
  • Issues event: 5
  • Watch event: 19
  • Issue comment event: 15
  • Fork event: 9
Last Year
  • Issues event: 5
  • Watch event: 19
  • Issue comment event: 15
  • Fork event: 9

Issues and Pull Requests

Last synced: 6 months ago

All Time
  • Total issues: 3
  • Total pull requests: 0
  • Average time to close issues: N/A
  • Average time to close pull requests: N/A
  • Total issue authors: 3
  • Total pull request authors: 0
  • Average comments per issue: 0.0
  • Average comments per pull request: 0
  • Merged pull requests: 0
  • Bot issues: 0
  • Bot pull requests: 0
Past Year
  • Issues: 3
  • Pull requests: 0
  • Average time to close issues: N/A
  • Average time to close pull requests: N/A
  • Issue authors: 3
  • Pull request authors: 0
  • Average comments per issue: 0.0
  • Average comments per pull request: 0
  • Merged pull requests: 0
  • Bot issues: 0
  • Bot pull requests: 0
Top Authors
Issue Authors
  • Ab-34 (2)
  • wzhuo2022 (2)
  • ZhenboZhao77 (1)
  • Y-T-G (1)
  • Zihan-Liu-westlake (1)
  • upstream001 (1)
  • mburges-cvl (1)
  • sangeethnrs (1)
  • Oussamayousre (1)
  • geek-APTX4869 (1)
  • TaiDuc1001 (1)
Pull Request Authors
Top Labels
Issue Labels
Pull Request Labels

Dependencies

requirements/build.txt pypi
  • cython *
  • numpy *
requirements/docs.txt pypi
  • docutils ==0.16.0
  • recommonmark *
  • sphinx ==4.0.2
  • sphinx-copybutton *
  • sphinx_markdown_tables *
  • sphinx_rtd_theme ==0.5.2
requirements/mminstall.txt pypi
  • mmcv-full >=1.3.17
requirements/optional.txt pypi
  • cityscapesscripts *
  • imagecorruptions *
  • scipy *
  • sklearn *
requirements/readthedocs.txt pypi
  • mmcv *
  • torch *
  • torchvision *
requirements/runtime.txt pypi
  • matplotlib *
  • numpy *
  • pycocotools *
  • six *
  • terminaltables *
requirements/tests.txt pypi
  • asynctest * test
  • codecov * test
  • flake8 * test
  • interrogate * test
  • isort ==4.3.21 test
  • kwarray * test
  • onnx ==1.7.0 test
  • onnxruntime >=1.8.0 test
  • pytest * test
  • ubelt * test
  • xdoctest >=0.10.0 test
  • yapf * test
requirements.txt pypi
setup.py pypi