ai4rs

AI for remote sensing, remote sense, object detection, computer vision, cv

https://github.com/wokaikaixinxin/ai4rs

Science Score: 44.0%

This score indicates how likely this project is to be science-related based on various indicators:

  • CITATION.cff file
    Found CITATION.cff file
  • codemeta.json file
    Found codemeta.json file
  • .zenodo.json file
    Found .zenodo.json file
  • DOI references
  • Academic publication links
  • Academic email domains
  • Institutional organization owner
  • JOSS paper metadata
  • Scientific vocabulary similarity
    Low similarity (4.3%) to scientific vocabulary

Keywords

ai-for-remote-sensing ai-for-science cv mmdetection mmlab mmrotate object-detection oriented-bounding-box oriented-object-detection remote-sense remote-sensing rotated-object-detection
Last synced: 6 months ago

Repository

AI for remote sensing, remote sense, object detection, computer vision, cv

Basic Info
  • Host: GitHub
  • Owner: wokaikaixinxin
  • License: apache-2.0
  • Language: Python
  • Default Branch: main
  • Homepage:
  • Size: 10.4 MB
Statistics
  • Stars: 17
  • Watchers: 2
  • Forks: 1
  • Open Issues: 0
  • Releases: 0
Topics
ai-for-remote-sensing ai-for-science cv mmdetection mmlab mmrotate object-detection oriented-bounding-box oriented-object-detection remote-sense remote-sensing rotated-object-detection
Created 10 months ago · Last pushed 6 months ago
Metadata Files
Readme Contributing License Citation

README.md

[📘Documentation (Chinese)](https://mmrotate.readthedocs.io/zh_CN/1.x/) | [🛠️Installation (Chinese)](https://mmrotate.readthedocs.io/zh_CN/1.x/get_started.html) | [👀Model Zoo (Chinese)](https://mmrotate.readthedocs.io/zh_CN/1.x/model_zoo.html) | [📘Documentation (English)](https://mmrotate.readthedocs.io/en/1.x/) | [🛠️Installation (English)](https://mmrotate.readthedocs.io/en/1.x/get_started.html) | [👀Model Zoo (English)](https://mmrotate.readthedocs.io/en/1.x/model_zoo.html)

Introduction

This project aims to integrate remote-sensing-related work built on OpenMMLab, especially MMDetection and MMRotate.

Model Zoo

YOLO

| | | | |
| :---: | :---: | :---: | :---: |
| [Rotated YOLOX (arXiv 2021)](./projects/rotated_yolox/README.md) | [Rotated YOLOMS (TPAMI 2025)](./projects/rotated_yoloms/README.md) | | |

Oriented Object Detection - Architecture

| | | | |
| :---: | :---: | :---: | :---: |
| [Rotated RetinaNet-OBB/HBB (ICCV'2017)](configs/rotated_retinanet/README.md) | [Rotated FasterRCNN-OBB (TPAMI'2017)](configs/rotated_faster_rcnn/README.md) | [Rotated RepPoints-OBB (ICCV'2019)](configs/rotated_reppoints/README.md) | [Rotated FCOS (ICCV'2019)](configs/rotated_fcos/README.md) |
| [RoI Transformer (CVPR'2019)](configs/roi_trans/README.md) | [Gliding Vertex (TPAMI'2020)](configs/gliding_vertex/README.md) | [Rotated ATSS-OBB (CVPR'2020)](configs/rotated_atss/README.md) | [R3Det (AAAI'2021)](configs/r3det/README.md) |
| [S2A-Net (TGRS'2021)](configs/s2anet/README.md) | [ReDet (CVPR'2021)](configs/redet/README.md) | [Beyond Bounding-Box (CVPR'2021)](configs/cfa/README.md) | [Oriented R-CNN (ICCV'2021)](configs/oriented_rcnn/README.md) |
| [Rotated YOLOX (arXiv 2021)](./projects/rotated_yolox/README.md) | [SASM (AAAI'2022)](configs/sasm_reppoints/README.md) | [Oriented RepPoints (CVPR'2022)](configs/oriented_reppoints/README.md) | [RTMDet (arXiv 2022)](configs/rotated_rtmdet/README.md) |
| [Rotated DiffusionDet (ICCV'2023)](./projects/rotated_DiffusionDet/README.md) | [OrientedFormer (TGRS'2024)](projects/OrientedFormer/README.md) | [ReDiffDet base (CVPR'2025)](./projects/GSDet_baseline/README_ReDiffDet_baseline.md) | [GSDet base (IJCAI'2025)](./projects/GSDet_baseline/README_GSDet_baseline.md) |
| [Rotated YOLOMS (TPAMI 2025)](./projects/rotated_yoloms/README.md) | | | |

Oriented Object Detection - Loss

| | | | |
| :---: | :---: | :---: | :---: |
| [GWD (ICML'2021)](configs/gwd/README.md) | [KLD (NeurIPS'2021)](configs/kld/README.md) | [KFIoU (ICLR'2023)](configs/kfiou/README.md) | |

Oriented Object Detection - Coder

| | | | | |
| :---: | :---: | :---: | :---: | :---: |
| [CSL (ECCV'2020)](configs/csl/README.md) | [Oriented R-CNN (ICCV'2021)](configs/oriented_rcnn/README.md) | [PSC (CVPR'2023)](configs/psc/README.md) | [ACM (CVPR'2024)](./projects/ACM/README.md) | [GauCho (CVPR'2025)](projects/GauCho/README.md) |

Oriented Object Detection - Backbone

| | | | | |
| :---: | :---: | :---: | :---: | :---: |
| [ConvNeXt (CVPR'2022)](./configs/convnext/README.md) | [LSKNet (ICCV'2023)](projects/LSKNet/README.md) | [ARC (ICCV'2023)](./projects/ARC/README.md) | [PKINet (CVPR'2024)](./projects/PKINet/README.md) | [SARDet 100K (NeurIPS'2024)](./projects/SARDet_100K/README.md) |
| [GRA (ECCV'2024)](./projects/GRA/README.md) | [Strip R-CNN (arXiv'2025)](./projects/Strip_RCNN/README.md) | [LEGNet (ICCVW'2025)](./projects/LEGNet/README.md) | [LWGANet (arXiv'2025)](./projects/LWGANet/README.md) | |

Oriented Object Detection - Weakly Supervised

| | | | | |
| :---: | :---: | :---: | :---: | :---: |
| [H2RBox (ICLR'2023)](configs/h2rbox/README.md) | [H2RBox-v2 (NeurIPS'2023)](configs/h2rbox_v2/README.md) | [Point2Rbox (CVPR'2024)](./projects/Point2Rbox/README.md) | [Point2Rbox-v2 (CVPR'2025)](./projects/Point2Rbox_v2/README.md) | [WhollyWOOD (TPAMI'2025)](./projects/WhollyWOOD/README.md) |

Oriented Object Detection - Semi-Supervised

Coming soon.

SAR

| | | | |
| :---: | :---: | :---: | :---: |
| SARDet 100K (NeurIPS'2024) | RSAR (CVPR'2025) | | |

SAM

| | | | |
| :---: | :---: | :---: | :---: |
| MMRotate SAM | | | |

Installation

To support H2RBox-v2, Point2Rbox, and Mamba, we use PyTorch 2.x.

Step 1: Install Anaconda or Miniconda

Step 2: Create a virtual environment

```
conda create --name ai4rs python=3.10 -y
conda activate ai4rs
```

Step 3: Install PyTorch according to the official instructions. For example:

conda install pytorch==2.1.2 torchvision==0.16.2 torchaudio==2.1.2 pytorch-cuda=12.1 -c pytorch -c nvidia

Verify that PyTorch can access CUDA:

python -c "import torch; print(torch.cuda.is_available())"

Step 4: Install MMEngine and MMCV. We recommend using MIM to complete the installation:

```
pip install -U openmim -i https://pypi.tuna.tsinghua.edu.cn/simple
mim install mmengine -i https://pypi.tuna.tsinghua.edu.cn/simple
mim install "mmcv>2.0.0rc4, <2.2.0" -i https://pypi.tuna.tsinghua.edu.cn/simple
```

Step 5: Install MMDetection

mim install 'mmdet>3.0.0rc6, <3.4.0' -i https://pypi.tuna.tsinghua.edu.cn/simple

Step 6: Install ai4rs

```
git clone https://github.com/wokaikaixinxin/ai4rs.git
cd ai4rs
pip install -v -e . -i https://pypi.tuna.tsinghua.edu.cn/simple
# "-v" means verbose, or more output
# "-e" means installing a project in editable mode,
# thus any local modifications made to the code will take effect without reinstallation.
```
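
As an optional sanity check (a minimal sketch, not part of the official instructions), you can confirm that the OpenMMLab packages installed above import cleanly and that MMCV's compiled ops, which rotated-box detectors rely on, are available:

```
mim list  # lists the installed OpenMMLab packages (mmengine, mmcv, mmdet, ...)
python -c "import mmengine, mmcv, mmdet; print(mmengine.__version__, mmcv.__version__, mmdet.__version__)"
python -c "from mmcv.ops import box_iou_rotated; print('mmcv compiled ops OK')"
```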

Step 7: Version of NumPy

If the installed NumPy version is incompatible, you may see an error like the one below; in that case, downgrade NumPy to 1.x.

```
A module that was compiled using NumPy 1.x cannot be run in NumPy 2.0.1 as it may crash.
To support both 1.x and 2.x versions of NumPy, modules must be compiled with NumPy 2.0.
Some module may need to rebuild instead e.g. with 'pybind11>=2.12'.

If you are a user of the module, the easiest solution will be to downgrade to 'numpy<2'
or try to upgrade the affected module. We expect that some modules will need time to support NumPy 2.
```

pip install "numpy<2" -i https://pypi.tuna.tsinghua.edu.cn/simple

Data Preparation

Please refer to data_preparation.md to prepare the data

| | | | |
| :---: | :---: | :---: | :---: |
| DOTA | DIOR | SSDD | HRSC |
| HRSID | SRSDD | RSDD | ICDAR2015 |
| SARDet 100K | RSAR | FAIR1M | |
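
The authoritative directory structure for each dataset is given in data_preparation.md. Purely as an illustration (assuming ai4rs follows the usual MMRotate convention; the paths below are not verified against this repository), a split DOTA dataset typically ends up organized like this:

```
ai4rs
├── data
│   ├── split_ss_dota
│   │   ├── trainval
│   │   │   ├── images
│   │   │   └── annfiles
│   │   └── test
│   │       ├── images
│   │       └── annfiles
```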

Train

Single-node single-GPU
python tools/train.py config_path
For example:
python tools/train.py projects/GSDet_baseline/configs/GSDet_r50_b900_h2h4_h2r1_r2r1_2x_dior.py

Single-node multi-GPU
bash tools/dist_train.sh config_path num_gpus
For example:
bash tools/dist_train.sh projects/GSDet_baseline/configs/GSDet_r50_b900_h2h4_h2r1_r2r1_2x_dior.py 2
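
The training script appears to be the standard MMEngine/MMDetection entry point; if so, the usual optional flags apply. The sketch below assumes that interface and is not verified against this repository (the work directory name and batch-size override are arbitrary examples):

```
# Common optional flags of the standard MMDetection 3.x tools/train.py (assumed here):
#   --work-dir     directory where logs and checkpoints are written
#   --amp          enable automatic mixed-precision training
#   --cfg-options  override config values from the command line
python tools/train.py projects/GSDet_baseline/configs/GSDet_r50_b900_h2h4_h2r1_r2r1_2x_dior.py \
    --work-dir work_dirs/gsdet_dior \
    --amp \
    --cfg-options train_dataloader.batch_size=2
```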

Test

Single-node single-GPU
python tools/test.py config_path checkpoint_path
For example:
python tools/test.py configs/h2rbox_v2/h2rbox_v2-le90_r50_fpn-1x_dota.py work_dirs/h2rbox_v2-le90_r50_fpn-1x_dota-fa5ad1d2.pth

Single-node multi-GPU
bash tools/dist_test.sh config_path checkpoint_path num_gpus
For example:
bash tools/dist_test.sh configs/h2rbox_v2/h2rbox_v2-le90_r50_fpn-1x_dota.py work_dirs/h2rbox_v2-le90_r50_fpn-1x_dota-fa5ad1d2.pth 2
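
Likewise, if the test script follows the standard MMDetection 3.x tools/test.py interface (assumed here, not verified against this repository), predictions can be visualized or dumped to disk:

```
# Common optional flags of the standard MMDetection 3.x tools/test.py (assumed here):
#   --show-dir  save images with predicted boxes drawn on them
#   --out       dump raw predictions to a pickle file
python tools/test.py configs/h2rbox_v2/h2rbox_v2-le90_r50_fpn-1x_dota.py \
    work_dirs/h2rbox_v2-le90_r50_fpn-1x_dota-fa5ad1d2.pth \
    --show-dir vis_results \
    --out work_dirs/results.pkl
```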

Getting Started

Please see Overview for a general introduction to OpenMMLab.

For detailed user guides and advanced guides, please refer to our documentation.

FAQ

Please refer to FAQ for frequently asked questions.

Acknowledgement

OpenMMLab

OpenMMLab platform

MMDetection

MMRotate

Citation

If you use this toolbox or benchmark in your research, please cite the ai4rs project:

```bibtex

```

Owner

  • Login: wokaikaixinxin
  • Kind: user

Citation (CITATION.cff)

cff-version: 1.2.0
message: "If you use this software, please cite it as below."
authors:
  - name: "ai4rs Contributors"
title: "AI for Remote Sensing"
date-released: 2025-07-01
url: "https://github.com/wokaikaixinxin/ai4rs"
license: Apache-2.0

GitHub Events

Total
  • Issues event: 2
  • Watch event: 12
  • Push event: 146
Last Year
  • Issues event: 2
  • Watch event: 12
  • Push event: 146

Dependencies

.github/workflows/lint.yml actions
  • actions/checkout v2 composite
  • actions/setup-python v2 composite
.github/workflows/merge_stage_test.yml actions
  • actions/checkout v2 composite
  • actions/setup-python v2 composite
  • codecov/codecov-action v1.0.14 composite
.github/workflows/pr_stage_test.yml actions
  • actions/checkout v2 composite
  • actions/setup-python v2 composite
.github/workflows/publish-to-pypi.yml actions
  • actions/checkout v2 composite
  • actions/setup-python v1 composite
.github/workflows/test_mim.yml actions
  • actions/checkout v2 composite
  • actions/setup-python v2 composite
.circleci/docker/Dockerfile docker
  • pytorch/pytorch ${PYTORCH}-cuda${CUDA}-cudnn${CUDNN}-devel build
docker/Dockerfile docker
  • pytorch/pytorch ${PYTORCH}-cuda${CUDA}-cudnn${CUDNN}-devel build
docker/serve/Dockerfile docker
  • pytorch/pytorch ${PYTORCH}-cuda${CUDA}-cudnn${CUDNN}-devel build
requirements/build.txt pypi
  • cython *
  • numpy ==1.26.4
requirements/docs.txt pypi
  • docutils ==0.16.0
  • myst-parser *
  • sphinx ==4.0.2
  • sphinx-copybutton *
  • sphinx_markdown_tables *
  • sphinx_rtd_theme ==0.5.2
requirements/mminstall.txt pypi
  • mmcv >=2.0.0rc2,<2.1.0
  • mmdet >=3.0.0rc2,<3.1.0
  • mmengine >=0.1.0
requirements/optional.txt pypi
  • imagecorruptions *
  • scikit-learn *
  • scipy *
requirements/readthedocs.txt pypi
  • e2cnn *
  • mmcv >=2.0.0rc2
  • mmdet >=3.0.0rc2
  • mmengine >=0.1.0
  • torch *
  • torchvision *
requirements/runtime.txt pypi
  • matplotlib *
  • numpy ==1.26.4
  • pycocotools *
  • six *
  • terminaltables *
  • torch *
requirements/tests.txt pypi
  • asynctest * test
  • codecov * test
  • coverage * test
  • cython * test
  • flake8 * test
  • interrogate * test
  • isort ==4.3.21 test
  • kwarray * test
  • matplotlib * test
  • parameterized * test
  • pytest * test
  • scikit-learn * test
  • ubelt * test
  • wheel * test
  • xdoctest >=0.10.0 test
  • yapf * test
requirements.txt pypi
setup.py pypi