futr3d

Code for paper: FUTR3D: a unified sensor fusion framework for 3d detection

https://github.com/tsinghua-mars-lab/futr3d

Science Score: 62.0%

This score indicates how likely this project is to be science-related based on various indicators:

  • CITATION.cff file
    Found CITATION.cff file
  • codemeta.json file
    Found codemeta.json file
  • .zenodo.json file
    Found .zenodo.json file
  • DOI references
  • Academic publication links
    Links to: arxiv.org
  • Academic email domains
  • Institutional organization owner
    Organization tsinghua-mars-lab has institutional domain (group.iiis.tsinghua.edu.cn)
  • JOSS paper metadata
  • Scientific vocabulary similarity
    Low similarity (10.0%) to scientific vocabulary
Last synced: 8 months ago

Repository

Code for paper: FUTR3D: a unified sensor fusion framework for 3d detection

Basic Info
  • Host: GitHub
  • Owner: Tsinghua-MARS-Lab
  • License: apache-2.0
  • Language: Python
  • Default Branch: main
  • Homepage:
  • Size: 92.9 MB
Statistics
  • Stars: 304
  • Watchers: 16
  • Forks: 41
  • Open Issues: 29
  • Releases: 0
Created almost 4 years ago · Last pushed almost 3 years ago
Metadata Files
Readme License Citation

README.md

FUTR3D: A Unified Sensor Fusion Framework for 3D Detection

This repo implements the paper FUTR3D: A Unified Sensor Fusion Framework for 3D Detection. Paper · Project page

We built our implementation upon MMDetection3D 1.0.0rc6. The major part of the code is in the directory plugin/futr3d.
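In MMDetection3D-based repos, such a plugin directory is typically pulled in through the config rather than installed as a separate package. A minimal sketch of what that registration looks like, assuming the DETR3D-style plugin mechanism (the field values here are assumptions; check the actual configs under plugin/futr3d/configs):

```python
# Sketch of DETR3D-style plugin registration in a config file
# (values assumed; see plugin/futr3d/configs for the real settings).
plugin = True
plugin_dir = 'plugin/futr3d/'

# When the training script sees plugin=True, it imports the modules under
# plugin_dir so that the custom FUTR3D detector, heads, and dataset
# classes registered there become visible to the MMDetection3D registries.
```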

Environment

Prerequisite

  1. mmcv-full>=1.5.2, <=1.7.0
  2. mmdet>=2.24.0, <=3.0.0
  3. mmseg>=0.20.0, <=1.0.0
  4. nuscenes-devkit
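For reference, versions inside these ranges can be installed with pip roughly as below; the mmcv-full find-links URL depends on your CUDA and PyTorch versions, so treat the one here as an example only:

pip3 install 'mmcv-full>=1.5.2,<=1.7.0' -f https://download.openmmlab.com/mmcv/dist/cu113/torch1.10/index.html
pip3 install 'mmdet>=2.24.0,<=3.0.0' 'mmsegmentation>=0.20.0,<=1.0.0' nuscenes-devkit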

Installation

There is no need to install mmdet3d separately; please install it from this repo:

cd futr3d
pip3 install -v -e .

Data

Please follow the mmdet3d nuScenes data preparation guidance to process the data.

Notably, we have modified nuscenes_converter.py to add the radar information, so the infos.pkl generated by our code differs from that of the original code. Apart from the radar infos, the contents are the same as the original infos.pkl.
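If you want to verify the regenerated annotation files, a quick sanity check is to load the pickle and look for the added radar entries. A minimal sketch, assuming the mmdet3d-style infos layout; the file path and the radar key name are assumptions and may differ in your setup:

```python
# Quick sanity check (not part of the repo): confirm the regenerated
# nuScenes infos.pkl carries the extra radar entries.
# The path and the key name 'radars' are assumptions; adjust as needed.
import pickle

with open('data/nuscenes/nuscenes_infos_train.pkl', 'rb') as f:
    data = pickle.load(f)

sample = data['infos'][0]          # mmdet3d-style {'infos': [...], 'metadata': ...}
print(sorted(sample.keys()))       # should include a radar-related key
print(sample.get('radars', 'no radar entry found'))
```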

Train

For example, to train FUTR3D with LiDAR only on 8 GPUs, please use

bash tools/dist_train.sh plugin/futr3d/configs/lidar_only/lidar_0075_900q.py 8
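If you are not running distributed training, the same config can also be launched on a single GPU; assuming the standard MMDetection3D tools/train.py is present in this repo, that would be

python tools/train.py plugin/futr3d/configs/lidar_only/lidar_0075_900q.py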

For the LiDAR-Cam and Cam-Radar versions, we need pre-trained models.

The Cam-Radar version uses a DETR3D model as its pre-trained model; please check DETR3D.

The LiDAR-Cam version uses the fused LiDAR-only and Cam-only models as its pre-trained model. You can fuse the cam-only and lidar-only models with

python tools/fuse_model.py --img <cam checkpoint path> --lidar <lidar checkpoint path> --out <out model path>
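Conceptually, the fusion step merges the two checkpoints' state dicts, taking the image branch from the camera-only model and everything else from the LiDAR-only model. A rough sketch of that idea follows; the key prefixes and paths are assumptions, and the actual tools/fuse_model.py may select weights differently:

```python
# Rough sketch of checkpoint fusion (assumed behaviour; the real
# tools/fuse_model.py may pick keys differently).
import torch

cam = torch.load('cam_only.pth', map_location='cpu')      # hypothetical paths
lidar = torch.load('lidar_only.pth', map_location='cpu')

fused = {}
# Image branch (backbone/neck) comes from the camera-only checkpoint.
for name, weight in cam['state_dict'].items():
    if name.startswith(('img_backbone', 'img_neck')):
        fused[name] = weight
# Everything not already taken comes from the LiDAR-only checkpoint.
for name, weight in lidar['state_dict'].items():
    fused.setdefault(name, weight)

torch.save({'state_dict': fused}, 'lidar_cam_init.pth')
```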

Evaluate

For example, to evaluate FUTR3D with LiDAR-Cam on 8 GPUs, please use

bash tools/dist_test.sh plugin/futr3d/configs/lidar_cam/lidar_0075_cam_res101.py ../lidar_cam.pth 8 --eval bbox
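For single-GPU evaluation, assuming the standard MMDetection3D tools/test.py is present in this repo, the equivalent would be

python tools/test.py plugin/futr3d/configs/lidar_cam/lidar_0075_cam_res101.py ../lidar_cam.pth --eval bbox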

Results

LiDAR & Cam

| models | mAP | NDS | Link |
| ----------------- | ---- | ---- | ----- |
| Res101 + VoxelNet | 67.4 | 70.9 | model |
| VoVNet + VoxelNet | 70.3 | 73.1 | model |

Cam & Radar

| models | mAP | NDS | Link |
| -------------- | ---- | ---- | ----- |
| Res101 + Radar | 39.9 | 50.8 | model |

LiDAR only

| models | mAP | NDS | Link |
| ----------------- | ---- | ---- | ----- |
| 32 beam VoxelNet | 63.3 | 68.9 | model |
| 4 beam VoxelNet | 44.3 | 56.4 | |
| 1 beam VoxelNet | 16.9 | 39.2 | |

Cam only

The camera-only version of FUTR3D is the same as DETR3D. Please check DETR3D for the detailed implementation.

Acknowledgment

For the implementation, we rely heavily on MMCV, MMDetection, MMDetection3D, and DETR3D.

Related projects

  1. DETR3D: 3D Object Detection from Multi-view Images via 3D-to-2D Queries
  2. MUTR3D: A Multi-camera Tracking Framework via 3D-to-2D Queries
  3. For more projects on Autonomous Driving, check out our Visual-Centric Autonomous Driving (VCAD) project page.

Reference

@article{chen2022futr3d,
  title={FUTR3D: A Unified Sensor Fusion Framework for 3D Detection},
  author={Chen, Xuanyao and Zhang, Tianyuan and Wang, Yue and Wang, Yilun and Zhao, Hang},
  journal={arXiv preprint arXiv:2203.10642},
  year={2022}
}

Contact: Xuanyao Chen at: xuanyaochen19@fudan.edu.cn or ixyaochen@gmail.com

Owner

  • Name: Tsinghua MARS Lab
  • Login: Tsinghua-MARS-Lab
  • Kind: organization
  • Location: Beijing, China

MARS Lab at IIIS, Tsinghua University

Citation (CITATION.cff)

cff-version: 1.2.0
message: "If you use this software, please cite it as below."
authors:
  - name: "MMDetection3D Contributors"
title: "OpenMMLab's Next-generation Platform for General 3D Object Detection"
date-released: 2020-07-23
url: "https://github.com/open-mmlab/mmdetection3d"
license: Apache-2.0

GitHub Events

Total
  • Issues event: 4
  • Watch event: 40
  • Issue comment event: 5
  • Fork event: 3
Last Year
  • Issues event: 4
  • Watch event: 40
  • Issue comment event: 5
  • Fork event: 3

Issues and Pull Requests

Last synced: 8 months ago

All Time
  • Total issues: 1
  • Total pull requests: 0
  • Average time to close issues: about 1 hour
  • Average time to close pull requests: N/A
  • Total issue authors: 1
  • Total pull request authors: 0
  • Average comments per issue: 1.0
  • Average comments per pull request: 0
  • Merged pull requests: 0
  • Bot issues: 0
  • Bot pull requests: 0
Past Year
  • Issues: 1
  • Pull requests: 0
  • Average time to close issues: about 1 hour
  • Average time to close pull requests: N/A
  • Issue authors: 1
  • Pull request authors: 0
  • Average comments per issue: 1.0
  • Average comments per pull request: 0
  • Merged pull requests: 0
  • Bot issues: 0
  • Bot pull requests: 0
Top Authors
Issue Authors
  • Small-NengNeng (1)
  • nsa05605 (1)
  • young6man (1)
  • qq1018408006 (1)
  • sidiangongyuan (1)
  • KevinCodeGitHub (1)
  • CesarLiu (1)
  • huangyuanhao (1)
Pull Request Authors
  • sidiangongyuan (1)
Top Labels
Issue Labels
Pull Request Labels

Dependencies

docker/Dockerfile docker
  • pytorch/pytorch ${PYTORCH}-cuda${CUDA}-cudnn${CUDNN}-devel build
docker/serve/Dockerfile docker
  • pytorch/pytorch ${PYTORCH}-cuda${CUDA}-cudnn${CUDNN}-devel build
requirements/build.txt pypi
requirements/docs.txt pypi
  • docutils ==0.16.0
  • m2r *
  • mistune ==0.8.4
  • myst-parser *
  • sphinx ==4.0.2
  • sphinx-copybutton *
  • sphinx_markdown_tables *
requirements/mminstall.txt pypi
  • mmcv-full >=1.4.8,<=1.6.0
  • mmdet >=2.24.0,<=3.0.0
  • mmsegmentation >=0.20.0,<=1.0.0
requirements/optional.txt pypi
  • open3d *
  • spconv *
  • waymo-open-dataset-tf-2-1-0 ==1.2.0
requirements/readthedocs.txt pypi
  • mmcv >=1.4.8
  • mmdet >=2.24.0
  • mmsegmentation >=0.20.1
  • torch *
  • torchvision *
requirements/runtime.txt pypi
  • lyft_dataset_sdk *
  • networkx >=2.2,<2.3
  • numba ==0.53.0
  • numpy *
  • nuscenes-devkit *
  • plyfile *
  • scikit-image *
  • tensorboard *
  • trimesh >=2.35.39,<2.35.40
requirements/tests.txt pypi
  • asynctest * test
  • codecov * test
  • flake8 * test
  • interrogate * test
  • isort * test
  • kwarray * test
  • pytest * test
  • pytest-cov * test
  • pytest-runner * test
  • ubelt * test
  • xdoctest >=0.10.0 test
  • yapf * test
requirements.txt pypi
setup.py pypi