farp-net

The implementation of the paper: FARP-Net: Local-Global Feature Aggregation and Relation-Aware Proposals for 3D Object Detection.

https://github.com/xt-1997/farp-net

Science Score: 44.0%

This score indicates how likely this project is to be science-related based on various indicators:

  • CITATION.cff file
    Found CITATION.cff file
  • codemeta.json file
    Found codemeta.json file
  • .zenodo.json file
    Found .zenodo.json file
  • DOI references
  • Academic publication links
  • Academic email domains
  • Institutional organization owner
  • JOSS paper metadata
  • Scientific vocabulary similarity
    Low similarity (11.1%) to scientific vocabulary
Last synced: 6 months ago

Repository


Basic Info
  • Host: GitHub
  • Owner: XT-1997
  • License: apache-2.0
  • Language: Python
  • Default Branch: main
  • Homepage:
  • Size: 9.11 MB
Statistics
  • Stars: 27
  • Watchers: 2
  • Forks: 7
  • Open Issues: 1
  • Releases: 0
Created over 3 years ago · Last pushed over 2 years ago
Metadata Files
Readme License Citation

README.md

TMM2023-FARP-Net: Local-Global Feature Aggregation and Relation-Aware Proposals for 3D Object Detection

This is a MMDetection3D implementation of the paper "FARP-Net: Local-Global Feature Aggregation and Relation-Aware Proposals for 3D Object Detection".

Prerequisites

The code is tested with Python 3.7, PyTorch 1.10, CUDA 11.3, mmdet3d 1.0.0rc2, mmcv-full 1.5.0, and mmdet 2.24.1. We recommend using Anaconda to make sure that all dependencies are in place. Note that different library versions may change the results.

Step 1. Create a conda environment and activate it.

```shell
conda create --name pt1.10.v1 python=3.7
conda activate pt1.10.v1
```

Step 2. Install MMDetection3D following the instructions here.

Step 3. Prepare SUN RGB-D Data following the procedure here.

Getting Started

For SUN RGB-D:

```shell
sh tools/slurm_train.sh $PARTITION $JOB_NAME configs/A2FRPG/A2FRPG_16x8_sunrgbd-3d-10class.py $WORK_DIR
```

For ScanNet (1x backbone):

```shell
sh tools/slurm_train.sh $PARTITION $JOB_NAME configs/A2FRPG/A2FRPG_8x8_scannet-3d-18class.py $WORK_DIR
```

For ScanNet (2x backbone):

```shell
sh tools/slurm_train.sh $PARTITION $JOB_NAME configs/A2FRPG/A2FRPG_8x8_scannet-3d-18class-2x.py $WORK_DIR
```

To evaluate a pretrained checkpoint:

```shell
sh tools/slurm_test.sh $PARTITION $JOB_NAME configs/A2FRPG/A2FRPG_16x8_sunrgbd-3d-10class.py $PRETRAINED_CKPT --eval mAP --work-dir $WORK_DIR
```
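The scripts above target a Slurm cluster. If you are running on a single machine, MMDetection3D's standard launchers can be used instead (a sketch assuming the repo keeps the stock `tools/` scripts from MMDetection3D; paths and GPU counts are illustrative):

```shell
# Single-GPU training
python tools/train.py configs/A2FRPG/A2FRPG_16x8_sunrgbd-3d-10class.py --work-dir $WORK_DIR

# Multi-GPU training with torch.distributed (here: 8 GPUs)
bash tools/dist_train.sh configs/A2FRPG/A2FRPG_16x8_sunrgbd-3d-10class.py 8

# Single-GPU evaluation of a checkpoint
python tools/test.py configs/A2FRPG/A2FRPG_16x8_sunrgbd-3d-10class.py $PRETRAINED_CKPT --eval mAP
```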

Main Results

SUN RGB-D

| name | Lr schd | mAP@0.25 | Download |
|-----------|---------|----------|-------------|
| A2FRPGNet | 3x | 64.1 | model \| log |

ScanNet

| name | Lr schd | backbone | mAP@0.25 | Download |
|-----------|---------|----------|----------|-------------|
| A2FRPGNet | 3x | 1x | 69.1 | model \| log |
| A2FRPGNet | 3x | 2x | 70.9 | model \| log |
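The mAP@0.25 figures above are average precision computed with a 3D IoU matching threshold of 0.25: a detection counts as a true positive when its box overlaps a ground-truth box with IoU ≥ 0.25. As a rough illustration of that criterion (not the repo's evaluation code; `aabb_iou_3d` is a hypothetical helper, and real SUN RGB-D/ScanNet boxes may be oriented rather than axis-aligned):

```python
def aabb_iou_3d(box_a, box_b):
    """IoU of two axis-aligned 3D boxes given as (xmin, ymin, zmin, xmax, ymax, zmax)."""
    # Overlap extent along each axis, clamped at zero when the boxes are disjoint
    dx = max(0.0, min(box_a[3], box_b[3]) - max(box_a[0], box_b[0]))
    dy = max(0.0, min(box_a[4], box_b[4]) - max(box_a[1], box_b[1]))
    dz = max(0.0, min(box_a[5], box_b[5]) - max(box_a[2], box_b[2]))
    inter = dx * dy * dz

    def vol(b):
        return (b[3] - b[0]) * (b[4] - b[1]) * (b[5] - b[2])

    union = vol(box_a) + vol(box_b) - inter
    return inter / union if union > 0 else 0.0


# At the mAP@0.25 operating point, this pair matches (IoU = 1/3 >= 0.25):
iou = aabb_iou_3d((0, 0, 0, 2, 2, 2), (1, 0, 0, 3, 2, 2))
```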

Bibtex

If this repo is helpful for you, please consider citing it. Thank you! :)

```bibtex
@article{xie2023farp,
  title={FARP-Net: Local-Global Feature Aggregation and Relation-Aware Proposals for 3D Object Detection},
  author={Xie, Tao and Wang, Li and Wang, Ke and Li, Ruifeng and Zhang, Xinyu and Zhang, Haoming and Yang, Linqi and Liu, Huaping and Li, Jun},
  journal={IEEE Transactions on Multimedia},
  year={2023},
  publisher={IEEE}
}
```

Owner

  • Login: XT-1997
  • Kind: user

Citation (CITATION.cff)

```yaml
cff-version: 1.2.0
message: "If you use this software, please cite it as below."
authors:
  - name: "MMDetection3D Contributors"
title: "OpenMMLab's Next-generation Platform for General 3D Object Detection"
date-released: 2020-07-23
url: "https://github.com/open-mmlab/mmdetection3d"
license: Apache-2.0
```

GitHub Events

Total
  • Watch event: 1
Last Year
  • Watch event: 1

Dependencies

docker/Dockerfile docker
  • pytorch/pytorch ${PYTORCH}-cuda${CUDA}-cudnn${CUDNN}-devel build
docker/serve/Dockerfile docker
  • pytorch/pytorch ${PYTORCH}-cuda${CUDA}-cudnn${CUDNN}-devel build
mmdet3d.egg-info/requires.txt pypi
  • asynctest *
  • codecov *
  • flake8 *
  • interrogate *
  • isort *
  • kwarray *
  • lyft_dataset_sdk *
  • networkx <2.3,>=2.2
  • numba ==0.53.0
  • numpy *
  • nuscenes-devkit *
  • open3d *
  • plyfile *
  • pytest *
  • pytest-cov *
  • pytest-runner *
  • scikit-image *
  • spconv *
  • tensorboard *
  • trimesh <2.35.40,>=2.35.39
  • ubelt *
  • waymo-open-dataset-tf-2-1-0 ==1.2.0
  • xdoctest >=0.10.0
  • yapf *
requirements/docs.txt pypi
  • docutils ==0.16.0
  • m2r *
  • mistune ==0.8.4
  • myst-parser *
  • sphinx ==4.0.2
  • sphinx-copybutton *
  • sphinx_markdown_tables *
requirements/mminstall.txt pypi
  • mmcv-full >=1.4.8,<=1.5.0
  • mmdet >=2.19.0,<=3.0.0
  • mmsegmentation >=0.20.0,<=1.0.0
requirements/optional.txt pypi
  • open3d *
  • spconv *
  • waymo-open-dataset-tf-2-1-0 ==1.2.0
requirements/readthedocs.txt pypi
  • mmcv >=1.4.8
  • mmdet >=2.19.0
  • mmsegmentation >=0.20.1
  • torch *
  • torchvision *
requirements/runtime.txt pypi
  • lyft_dataset_sdk *
  • networkx >=2.2,<2.3
  • numba ==0.53.0
  • numpy *
  • nuscenes-devkit *
  • plyfile *
  • scikit-image *
  • tensorboard *
  • trimesh >=2.35.39,<2.35.40
requirements/tests.txt pypi
  • asynctest * test
  • codecov * test
  • flake8 * test
  • interrogate * test
  • isort * test
  • kwarray * test
  • pytest * test
  • pytest-cov * test
  • pytest-runner * test
  • ubelt * test
  • xdoctest >=0.10.0 test
  • yapf * test
requirements/build.txt pypi
requirements.txt pypi
setup.py pypi