star-mmrotate

[TPAMI] Oriented object detection on STAR dataset.

https://github.com/visionxlab/star-mmrotate

Science Score: 36.0%

This score indicates how likely this project is to be science-related based on various indicators:

  • CITATION.cff file
  • codemeta.json file
    Found codemeta.json file
  • .zenodo.json file
    Found .zenodo.json file
  • DOI references
  • Academic publication links
    Links to: arxiv.org
  • Academic email domains
  • Institutional organization owner
  • JOSS paper metadata
  • Scientific vocabulary similarity
    Low similarity (11.7%) to scientific vocabulary

Keywords

dataset oriented-object-detection scene-graph-generation
Last synced: 6 months ago

Repository

[TPAMI] Oriented object detection on STAR dataset.

Basic Info
Statistics
  • Stars: 76
  • Watchers: 4
  • Forks: 4
  • Open Issues: 5
  • Releases: 0
Topics
dataset oriented-object-detection scene-graph-generation
Created over 1 year ago · Last pushed about 1 year ago
Metadata Files
Readme License Citation

README.md

STAR: A First-Ever Dataset and A Large-Scale Benchmark for Scene Graph Generation in Large-Size Satellite Imagery (TPAMI)

The official implementation of the oriented object detection part of the paper "STAR: A First-Ever Dataset and A Large-Scale Benchmark for Scene Graph Generation in Large-Size Satellite Imagery".

Highlights

TL;DR: We propose STAR, the first large-scale dataset for scene graph generation in large-size VHR SAI, containing more than 210,000 objects and over 400,000 triplets across 1,273 complex scenarios worldwide.

https://private-user-images.githubusercontent.com/29257168/345304070-0d1b8726-5a46-4182-95b9-bc70a050e49b.mp4

Abstract

Scene graph generation (SGG) in satellite imagery (SAI) benefits promoting understanding of geospatial scenarios from perception to cognition. In SAI, objects exhibit great variations in scales and aspect ratios, and there exist rich relationships between objects (even between spatially disjoint objects), which makes it attractive to holistically conduct SGG in large-size very-high-resolution (VHR) SAI. However, there lack such SGG datasets. Due to the complexity of large-size SAI, mining triplets heavily relies on long-range contextual reasoning. Consequently, SGG models designed for small-size natural imagery are not directly applicable to large-size SAI. This paper constructs a large-scale dataset for SGG in large-size VHR SAI with image sizes ranging from 512×768 to 27,860×31,096 pixels, named STAR (Scene graph generaTion in lArge-size satellite imageRy), encompassing over 210K objects and over 400K triplets. To realize SGG in large-size SAI, we propose a context-aware cascade cognition (CAC) framework to understand SAI regarding object detection (OBD), pair pruning and relationship prediction for SGG. We also release a SAI-oriented SGG toolkit with about 30 OBD and 10 SGG methods which need further adaptation by our devised modules on our challenging STAR dataset. The dataset and toolkit are available at: https://linlin-dev.github.io/project/STAR.


Usage

For more instructions on installation, pretrained models, training and evaluation, please refer to MMRotate 0.3.4. A quick environment check and example training/evaluation commands are sketched below, after the installation steps and the model table.

  • Clone this repo:

```bash
git clone https://github.com/yangxue0827/STAR-MMRotate
cd STAR-MMRotate/
```

  • Create a conda virtual environment and activate it:

```bash
conda create -n STAR-MMRotate python=3.8 -y
conda activate STAR-MMRotate
```

  • Install PyTorch:

```bash
pip install torch==1.13.1+cu117 torchvision==0.14.1+cu117 torchaudio==0.13.1 --extra-index-url https://download.pytorch.org/whl/cu117
```

  • Install requirements:

```bash
pip install openmim
mim install mmcv-full
mim install mmdet

cd mmrotate
pip install -r requirements/build.txt
pip install -v -e .

pip install timm
pip install ipdb

# Optional, only for G-Rep
git clone git@github.com:KinglittleQ/torch-batch-svd.git
cd torch-batch-svd/
python setup.py install
```
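Once the packages above are installed, a quick sanity check can confirm that the toolchain imports correctly and sees the GPU. This is a minimal sketch based on the packages installed in the previous steps; exact version numbers will depend on your environment.

```bash
# Verify the install (assumes the packages from the steps above are present).
python -c "import torch; print('torch', torch.__version__, '| CUDA available:', torch.cuda.is_available())"
python -c "import mmcv, mmdet, mmrotate; print('mmcv', mmcv.__version__, '| mmdet', mmdet.__version__, '| mmrotate', mmrotate.__version__)"
```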

Released Models

Oriented Object Detection

| Detector | mAP | Configs | Download | Note |
| :------: | :-: | :-----: | :------: | :--: |
| Deformable DETR | 17.1 | deformabledetrr501xstar | log \| ckpt | |
| ARS-DETR | 28.1 | dnarwarmarcslrdetrr501x_star | log \| ckpt | |
| RetinaNet | 21.8 | rotatedretinanethbbr50fpn1xstar_oc | log \| ckpt | |
| ATSS | 20.4 | rotatedatsshbbr50fpn1xstar_oc | log \| ckpt | |
| KLD | 25.0 | rotatedretinanethbbkldr50fpn1xstaroc | log \| ckpt | |
| GWD | 25.3 | rotatedretinanethbbgwdr50fpn1xstaroc | log \| ckpt | |
| KFIoU | 25.5 | rotatedretinanethbbkfiour50fpn1xstaroc | log \| ckpt | |
| DCFL | 29.0 | dcflr50fpn1xstar_le135 | log \| ckpt | |
| R3Det | 23.7 | r3detr50fpn1xstar_oc | log \| ckpt | |
| S2A-Net | 27.3 | s2anetr50fpn1xstar_le135 | log \| ckpt | |
| FCOS | 28.1 | rotatedfcosr50fpn1xstarle90 | log \| ckpt | |
| CSL | 27.4 | rotatedfcoscslgaussianr50fpn1xstarle90 | log \| ckpt | |
| PSC | 30.5 | rotatedfcospscr50fpn1xstar_le90 | log \| ckpt | |
| H2RBox-v2 | 27.3 | h2rboxv2pr50fpn1xstarle90 | log \| ckpt | |
| RepPoints | 19.7 | rotatedreppointsr50fpn1xstaroc | log \| ckpt | |
| CFA | 25.1 | cfar50fpn1xstar_le135 | log \| ckpt | |
| Oriented RepPoints | 27.0 | orientedreppointsr50fpn1xstarle135 | log \| ckpt | |
| G-Rep | 26.9 | greppointsr50fpn1xstarle135 | log \| ckpt | |
| SASM | 28.2 | sasmreppointsr50fpn1xstaroc | log \| ckpt | p_bs=2 |
| Faster RCNN | 32.6 | rotatedfasterrcnnr50fpn1xstar_le90 | log \| ckpt | |
| Gliding Vertex | 30.7 | glidingvertexr50fpn1xstarle90 | log \| ckpt | |
| Oriented RCNN | 33.2 | orientedrcnnr50fpn1xstarle90 | log \| ckpt | |
| RoI Transformer | 35.7 | roitransr50fpn1xstarle90 | log \| ckpt | |
| LSKNet-T | 34.7 | lsktfpn1xstar_le90 | log \| ckpt | |
| LSKNet-S | 37.8 | lsksfpn1xstar_le90 | log \| ckpt | |
| PKINet-S | 32.8 | pkinetsfpn1xstar_le90 | log \| ckpt | |
| ReDet | 39.1 | redetre50refpn1xstar_le90 | log \| ckpt | ReResNet50 |
| Oriented RCNN | 40.7 | orientedrcnnswin-lfpn1xstarle90 | log \| ckpt | Swin-L |
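Training and evaluation follow the standard MMRotate 0.3.x entry points (`tools/train.py`, `tools/dist_train.sh`, `tools/test.py`). The sketch below uses an illustrative config path only; substitute the actual `.py` config file under `configs/` that corresponds to the row you want from the table above, and the checkpoint you downloaded from that row.

```bash
# Single-GPU training (the config path is an example placeholder, not an
# exact filename from this repo; pick the config matching a table row).
python tools/train.py configs/star/oriented_rcnn_r50_fpn_1x_star_le90.py

# Multi-GPU training, e.g. with 2 GPUs.
./tools/dist_train.sh configs/star/oriented_rcnn_r50_fpn_1x_star_le90.py 2

# Evaluate a released checkpoint (reports mAP as in the table).
python tools/test.py configs/star/oriented_rcnn_r50_fpn_1x_star_le90.py \
    path/to/downloaded_checkpoint.pth --eval mAP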

Citation

If you find this work helpful for your research, please consider giving this repo a star and citing our paper:

```bibtex
@article{li2024star,
  title={Star: A first-ever dataset and a large-scale benchmark for scene graph generation in large-size satellite imagery},
  author={Li, Yansheng and Wang, Linlin and Wang, Tingzhu and Yang, Xue and Luo, Junwei and Wang, Qi and Deng, Youming and Wang, Wenbin and Sun, Xian and Li, Haifeng and others},
  journal={IEEE Transactions on Pattern Analysis and Machine Intelligence},
  year={2024},
  publisher={IEEE}
}
```

License

This project is released under the Apache license. Parts of this project contain code and models from other sources, which are subject to their respective licenses.

Owner

  • Name: VisionXLab
  • Login: VisionXLab
  • Kind: organization
  • Email: yangxue0827@126.com

VisionXLab at Shanghai Jiao Tong University, led by Prof. Xue Yang.

GitHub Events

Total
  • Issues event: 3
  • Watch event: 10
  • Issue comment event: 1
  • Push event: 1
  • Fork event: 1
Last Year
  • Issues event: 3
  • Watch event: 10
  • Issue comment event: 1
  • Push event: 1
  • Fork event: 1

Issues and Pull Requests

Last synced: 11 months ago

All Time
  • Total issues: 11
  • Total pull requests: 0
  • Average time to close issues: about 1 month
  • Average time to close pull requests: N/A
  • Total issue authors: 9
  • Total pull request authors: 0
  • Average comments per issue: 2.18
  • Average comments per pull request: 0
  • Merged pull requests: 0
  • Bot issues: 0
  • Bot pull requests: 0
Past Year
  • Issues: 11
  • Pull requests: 0
  • Average time to close issues: about 1 month
  • Average time to close pull requests: N/A
  • Issue authors: 9
  • Pull request authors: 0
  • Average comments per issue: 2.18
  • Average comments per pull request: 0
  • Merged pull requests: 0
  • Bot issues: 0
  • Bot pull requests: 0
Top Authors
Issue Authors
  • hias-lcj (2)
  • AlNaCl (1)
  • luckytanyy (1)
  • xavibou (1)
  • yangxue0827 (1)
  • chagmgang (1)
  • 4del-Yousefi (1)
Pull Request Authors
Top Labels
Issue Labels
Pull Request Labels

Dependencies

docker/Dockerfile docker
  • pytorch/pytorch ${PYTORCH}-cuda${CUDA}-cudnn${CUDNN}-devel build
docker/serve/Dockerfile docker
  • pytorch/pytorch ${PYTORCH}-cuda${CUDA}-cudnn${CUDNN}-devel build
requirements/build.txt pypi
  • cython *
  • numpy *
requirements/docs.txt pypi
  • docutils ==0.16.0
  • markdown >=3.4.0
  • myst-parser *
  • sphinx ==4.0.2
  • sphinx-copybutton *
  • sphinx_markdown_tables >=0.0.16
  • sphinx_rtd_theme ==0.5.2
requirements/mminstall.txt pypi
  • mmcv-full >=1.5.0
requirements/optional.txt pypi
  • imagecorruptions *
  • scikit-learn *
  • scipy *
requirements/readthedocs.txt pypi
  • e2cnn *
  • mmcv *
  • mmdet >=2.25.1,<3.0.0
  • torch *
  • torchvision *
requirements/runtime.txt pypi
  • matplotlib *
  • mmcv-full *
  • mmdet >=2.25.1,<3.0.0
  • numpy *
  • pycocotools *
  • six *
  • terminaltables *
  • torch *
requirements/tests.txt pypi
  • asynctest * test
  • codecov * test
  • coverage * test
  • cython * test
  • flake8 * test
  • interrogate * test
  • isort ==4.3.21 test
  • kwarray * test
  • matplotlib * test
  • pytest * test
  • scikit-learn * test
  • ubelt * test
  • wheel * test
  • xdoctest >=0.10.0 test
  • yapf * test
requirements.txt pypi
setup.py pypi