mmrotate-ssood

This repository focuses on Semi-Supervised Oriented Object Detection

https://github.com/haru-zt/mmrotate-ssood

Science Score: 54.0%

This score indicates how likely this project is to be science-related based on various indicators:

  • CITATION.cff file
    Found CITATION.cff file
  • codemeta.json file
    Found codemeta.json file
  • .zenodo.json file
    Found .zenodo.json file
  • DOI references
  • Academic publication links
    Links to: arxiv.org, ieee.org
  • Academic email domains
  • Institutional organization owner
  • JOSS paper metadata
  • Scientific vocabulary similarity
    Low similarity (14.1%) to scientific vocabulary
Last synced: 6 months ago

Repository


Basic Info
  • Host: GitHub
  • Owner: Haru-zt
  • License: apache-2.0
  • Language: Python
  • Default Branch: main
  • Size: 4.6 MB
Statistics
  • Stars: 2
  • Watchers: 1
  • Forks: 0
  • Open Issues: 0
  • Releases: 0
Created about 1 year ago · Last pushed 9 months ago
Metadata Files
Readme Contributing License Citation

README.md

mmrotate-SSOOD: Simplified Framework for Semi-Supervised Oriented Object Detection

Introduction

mmrotate-SSOOD is a simplified, modular, and flexible framework designed specifically for Semi-Supervised Oriented Object Detection (SSOOD) tasks.

New Method Supported!

We now support [Denser Teacher (TCSVT 2025)](https://ieeexplore.ieee.org/document/10802941)!

Key Features

Our framework offers the following advantages:

  • Simplified Implementation: Implementing custom semi-supervised detection methods is straightforward. You only need to modify two key functions, allowing faster experimentation and development.
  • Flexible Data Augmentation: Built upon MMCV, our framework supports seamless integration of custom data augmentation techniques. We also provide ready-to-use augmentation configs for fair comparisons with prior works.
  • Dataset Splitting Tools: Easily split your dataset into labeled and unlabeled subsets using our user-friendly tools, saving time on data preparation for semi-supervised learning.
  • Extensible Method Support: Currently, we support [Denser Teacher (TCSVT 2025)](https://ieeexplore.ieee.org/document/10802941), [SOOD (CVPR 2023)](https://arxiv.org/abs/2304.04515), and [Dense Teacher (ECCV 2022)](https://arxiv.org/abs/2207.02541), with plans to add more state-of-the-art semi-supervised learning methods in future updates.
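To make the labeled/unlabeled split concrete, here is a minimal sketch of how such a splitting tool can work. The function name and API are hypothetical, not taken from this repository; only the idea (a deterministic, seeded shuffle-and-split) reflects the feature described above.

```python
# Hedged sketch of splitting image IDs into labeled/unlabeled subsets,
# analogous to this repo's dataset splitting tools. The function name
# and API are hypothetical, not taken from the repository.
import random

def split_labeled_unlabeled(image_ids, labeled_fraction, seed=0):
    """Deterministically shuffle IDs and split into (labeled, unlabeled)."""
    rng = random.Random(seed)          # fixed seed -> reproducible splits
    ids = list(image_ids)
    rng.shuffle(ids)
    n_labeled = int(len(ids) * labeled_fraction)
    return ids[:n_labeled], ids[n_labeled:]

labeled, unlabeled = split_labeled_unlabeled(range(100), labeled_fraction=0.1)
print(len(labeled), len(unlabeled))  # 10 90
```

Using a seeded `random.Random` instance (rather than the module-level functions) keeps the split reproducible across runs, which matters for fair comparisons between semi-supervised settings.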

Future Methods Support

We plan to continually update this framework to include more state-of-the-art semi-supervised learning methods. Here are some of the methods we aim to support in future updates:

  1. [Soft Teacher (ICCV 2021)](https://arxiv.org/abs/2106.09018) (End-to-End Semi-Supervised Object Detection with Soft Teacher): a well-known semi-supervised object detection method that uses a teacher-student architecture with pseudo-label refinement for better performance.

  2. [ARSL (CVPR 2023)](https://arxiv.org/abs/2303.14960) (Ambiguity-Resistant Semi-Supervised Learning for Dense Object Detection): focuses on resolving ambiguities in semi-supervised dense object detection.

Requirements

To ensure compatibility, please install the following dependencies:

1. PyTorch

  • PyTorch: 1.13.x
    We recommend PyTorch 1.13.x as all modules have been tested with this version. Installation guide: PyTorch.org

2. MMDetection

  • MMDetection: 3.0.0
    MMDetection serves as the base object detection framework. Refer to the MMDetection documentation for installation instructions.

3. MMPretrain

Notes:

  • CUDA Compatibility: Make sure all dependencies match your system's CUDA version for proper GPU acceleration. Check the PyTorch documentation for compatibility.
  • Virtual Environment: For a cleaner setup, we highly recommend using a virtual environment like conda or venv.

Installation Example:

Here's a quick guide to set up the environment:

```bash
# Create a virtual environment
conda create -n ssood python=3.10
conda activate ssood

# Install PyTorch
# (follow the instructions at https://pytorch.org for your CUDA version)

# Install mmdet
pip install -U openmim
mim install mmengine
mim install "mmcv==2.0.0"
mim install mmdet==3.0.0

# Install mmpretrain
mim install mmpretrain==1.1.0
pip install future tensorboard

# Install this repository in editable mode
pip install -v -e .
```
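After installation, you can sanity-check which of the pinned packages are actually present. This small helper is illustrative (not part of the repository) and uses only the standard library:

```python
# Sketch: report installed versions of the dependencies pinned above.
# The helper itself is illustrative and not part of this repository.
from importlib.metadata import version, PackageNotFoundError

def installed_version(package):
    """Return the installed version string, or None if the package is absent."""
    try:
        return version(package)
    except PackageNotFoundError:
        return None

if __name__ == "__main__":
    for pkg in ("torch", "mmcv", "mmdet", "mmpretrain"):
        print(pkg, installed_version(pkg) or "not installed")
```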

Data Preparation

Please refer to data_preparation.md to prepare the original data. After that, the data folder should be organized as follows:

```
data
└── split_ss_dota1_5
    ├── train
    │   ├── images
    │   └── annfiles
    ├── val
    │   ├── images
    │   └── annfiles
    └── test
        ├── images
        └── annfiles
```

For the partially labeled setting, we split DOTA-v1.5's train set using the data-split lists released by the authors and our split tool:

```bash
python tools/SSOD/split_dota1.5_via_lists.py
```

For the fully labeled setting, we use the DOTA-v1.5 train set as the labeled set and the DOTA-v1.5 test set as the unlabeled set.

After that, the data folder should be organized as follows:

```
data
└── split_ss_dota1_5
    ├── train
    │   ├── images
    │   └── annfiles
    ├── train_10_labeled
    │   ├── images
    │   └── annfiles
    ├── train_10_unlabeled
    │   ├── images
    │   └── annfiles
    ├── train_20_labeled
    │   ├── images
    │   └── annfiles
    ├── train_20_unlabeled
    │   ├── images
    │   └── annfiles
    ├── train_30_labeled
    │   ├── images
    │   └── annfiles
    ├── train_30_unlabeled
    │   ├── images
    │   └── annfiles
    ├── val
    │   ├── images
    │   └── annfiles
    └── test
        ├── images
        └── annfiles
```

For DOTA-v1.0, the preparation is the same as for DOTA-v1.5.
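As a quick sanity check of the layout above, a small helper (hypothetical, not part of the repository) can report which expected split folders are missing before training starts:

```python
# Sketch: verify the data layout described above. The expected folder
# names follow the README; the helper itself is not part of this repo.
from pathlib import Path

EXPECTED_SPLITS = [
    "train",
    "train_10_labeled", "train_10_unlabeled",
    "train_20_labeled", "train_20_unlabeled",
    "train_30_labeled", "train_30_unlabeled",
    "val", "test",
]

def missing_splits(root):
    """Return expected split subfolders (images/annfiles) not found under root."""
    root = Path(root)
    missing = []
    for split in EXPECTED_SPLITS:
        for sub in ("images", "annfiles"):
            if not (root / split / sub).is_dir():
                missing.append(f"{split}/{sub}")
    return missing

# Example: missing_splits("data/split_ss_dota1_5") returns [] when the
# folder tree matches the layout shown above.
```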

Training

For Denser Teacher, to train with 10% labeled data, run:

```bash
CUDA_VISIBLE_DEVICES=0,1 PORT=29501 bash ./tools/dist_train.sh \
    configs/rotated_denser_teacher/rotated-denser-teacher_2xb3-180000k_semi-0.1-dotav1.5.py 2
```
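Dense Teacher-style methods (including SOOD and Denser Teacher) maintain the teacher model as an exponential moving average (EMA) of the student during training. A minimal sketch of that update, with an illustrative momentum value and plain Python lists standing in for model parameters:

```python
# Minimal sketch of the teacher-student EMA update used by Dense
# Teacher-style semi-supervised methods: t = m * t + (1 - m) * s.
# The momentum value and list-based parameters are illustrative only.
def ema_update(teacher_params, student_params, momentum=0.999):
    """Blend student weights into the teacher with momentum m."""
    return [t * momentum + s * (1.0 - momentum)
            for t, s in zip(teacher_params, student_params)]

teacher = [1.0, 0.0]
student = [0.0, 1.0]
teacher = ema_update(teacher, student, momentum=0.9)
print(teacher)
```

A high momentum keeps the teacher a slowly evolving, more stable copy of the student, which is what makes its pseudo-labels reliable enough to supervise the student on unlabeled images.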

Results

DOTA1.5

SOOD

| Backbone | Setting | mAP50 | mAP50 in Paper | Mem (GB) | Config |
| :---: | :---: | :---: | :---: | :---: | :---: |
| ResNet50 (1024,1024,200) | 10% | 47.93 | 48.63 | 8.45 | config |
| ResNet50 (1024,1024,200) | 20% | | 55.58 | | config |
| ResNet50 (1024,1024,200) | 30% | | 59.23 | | config |

Dense Teacher

| Backbone | Setting | mAP50 | mAP50 in Paper | Mem (GB) | Config |
| :---: | :---: | :---: | :---: | :---: | :---: |
| ResNet50 (1024,1024,200) | 10% | 47.10 | - | | config |
| ResNet50 (1024,1024,200) | 20% | | | | config |
| ResNet50 (1024,1024,200) | 30% | | | | config |

Denser Teacher
| Backbone | Setting | mAP50 | Mem (GB) | Config |
| :---: | :---: | :---: | :---: | :---: |
| ResNet50 (1024,1024,200) | 1% | 20.98 | | config |
| ResNet50 (1024,1024,200) | 5% | 43.40 | | config |
| ResNet50 (1024,1024,200) | 10% | 52.05 | | config |
| ResNet50 (1024,1024,200) | 20% | 57.49 | | config |
| ResNet50 (1024,1024,200) | 30% | 60.40 | | config |

DOTAv1.0

Denser Teacher
| Backbone | Setting | mAP50 | Mem (GB) | Config |
| :---: | :---: | :---: | :---: | :---: |
| ResNet50 (1024,1024,200) | 1% | 19.45 | | config |
| ResNet50 (1024,1024,200) | 5% | 45.84 | | config |
| ResNet50 (1024,1024,200) | 10% | 52.62 | | config |
| ResNet50 (1024,1024,200) | 20% | 59.20 | | config |
| ResNet50 (1024,1024,200) | 30% | 62.82 | | config |

Acknowledgement

This repo is built upon mmrotate. The implementation of SOOD is based on the official SOOD code, and the implementation of Dense Teacher is based on the official Dense Teacher code. Thanks for their open-source code.

Owner

  • Name: Haru
  • Login: Haru-zt
  • Kind: user

Citation (CITATION.cff)

cff-version: 1.2.0
message: "If you use this software, please cite it as below."
authors:
  - name: "MMRotate Contributors"
title: "OpenMMLab rotated object detection toolbox and benchmark"
date-released: 2022-02-18
url: "https://github.com/open-mmlab/mmrotate"
license: Apache-2.0

GitHub Events

Total
  • Watch event: 3
  • Push event: 4
Last Year
  • Watch event: 3
  • Push event: 4

Dependencies

.github/workflows/lint.yml actions
  • actions/checkout v2 composite
  • actions/setup-python v2 composite
.github/workflows/merge_stage_test.yml actions
  • actions/checkout v2 composite
  • actions/setup-python v2 composite
  • codecov/codecov-action v1.0.14 composite
.github/workflows/pr_stage_test.yml actions
  • actions/checkout v2 composite
  • actions/setup-python v2 composite
.github/workflows/publish-to-pypi.yml actions
  • actions/checkout v2 composite
  • actions/setup-python v1 composite
.github/workflows/test_mim.yml actions
  • actions/checkout v2 composite
  • actions/setup-python v2 composite
.circleci/docker/Dockerfile docker
  • pytorch/pytorch ${PYTORCH}-cuda${CUDA}-cudnn${CUDNN}-devel build
docker/Dockerfile docker
  • pytorch/pytorch ${PYTORCH}-cuda${CUDA}-cudnn${CUDNN}-devel build
docker/serve/Dockerfile docker
  • pytorch/pytorch ${PYTORCH}-cuda${CUDA}-cudnn${CUDNN}-devel build
requirements/build.txt pypi
  • cython *
  • numpy *
requirements/docs.txt pypi
  • docutils ==0.16.0
  • myst-parser *
  • sphinx ==4.0.2
  • sphinx-copybutton *
  • sphinx_markdown_tables *
  • sphinx_rtd_theme ==0.5.2
requirements/mminstall.txt pypi
  • mmcv >=2.0.0rc2,<2.1.0
  • mmdet >=3.0.0rc2,<3.2.0
  • mmengine >=0.1.0
requirements/optional.txt pypi
  • imagecorruptions *
  • scikit-learn *
  • scipy *
requirements/readthedocs.txt pypi
  • e2cnn *
  • mmcv >=2.0.0rc2
  • mmdet >=3.0.0rc2
  • mmengine >=0.1.0
  • torch *
  • torchvision *
requirements/runtime.txt pypi
  • matplotlib *
  • numpy *
  • pycocotools *
  • six *
  • terminaltables *
  • torch *
requirements/tests.txt pypi
  • asynctest * test
  • codecov * test
  • coverage * test
  • cython * test
  • flake8 * test
  • interrogate * test
  • isort ==4.3.21 test
  • kwarray * test
  • matplotlib * test
  • parameterized * test
  • pytest * test
  • scikit-learn * test
  • ubelt * test
  • wheel * test
  • xdoctest >=0.10.0 test
  • yapf * test
requirements.txt pypi
setup.py pypi