bevdet_dual
Science Score: 54.0%
This score indicates how likely this project is to be science-related based on various indicators:
- ✓ CITATION.cff file: found CITATION.cff file
- ✓ codemeta.json file: found codemeta.json file
- ✓ .zenodo.json file: found .zenodo.json file
- ○ DOI references
- ✓ Academic publication links: links to arxiv.org
- ○ Academic email domains
- ○ Institutional organization owner
- ○ JOSS paper metadata
- ○ Scientific vocabulary similarity: low similarity (11.1%) to scientific vocabulary
Repository
Basic Info
- Host: GitHub
- Owner: Elaine-Blue
- License: apache-2.0
- Language: Python
- Default Branch: main
- Size: 22.9 MB
Statistics
- Stars: 0
- Watchers: 1
- Forks: 0
- Open Issues: 0
- Releases: 0
Metadata Files
README.md
BEVDet_Dual

Introduction
We build a dual-branch bird's-eye-view (BEV) perception model, based mainly on the following two papers:
1. https://arxiv.org/abs/2112.11790
2. https://arxiv.org/abs/2203.17054
Get Started
Installation and Data Preparation
step 1. Prepare the environment:
```shell
pip install torch==1.10.0+cu113 -f https://mirrors.tuna.tsinghua.edu.cn/anaconda/cloud/pytorch/linux-64/
pip install torchvision==0.11.1+cu113 -f https://mirrors.tuna.tsinghua.edu.cn/anaconda/cloud/pytorch/linux-64/
# the PyG wheel index must match the installed torch build (1.10.0+cu113)
pip install torch-scatter -f https://pytorch-geometric.com/whl/torch-1.10.0+cu113.html
pip install torch-sparse -f https://pytorch-geometric.com/whl/torch-1.10.0+cu113.html
pip install torch-cluster -f https://pytorch-geometric.com/whl/torch-1.10.0+cu113.html
pip install torch-spline-conv -f https://pytorch-geometric.com/whl/torch-1.10.0+cu113.html
pip install -U -i https://pypi.tuna.tsinghua.edu.cn/simple torch_geometric==2.5.0
pip install mmcv-full==1.5.3 -f https://download.openmmlab.com/mmcv/dist/cu113/torch1.10.0/index.html
pip install mmdet==2.25.1 mmsegmentation==1.0.0rc4 -i https://pypi.tuna.tsinghua.edu.cn/simple
pip install numba==0.53.0
```
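The pinned versions above have to stay consistent with each other (torch 1.10.0 + cu113, and mmcv-full 1.5.3 built against that combination). A quick sanity check after installation:
```shell
# verify that the installed versions match the pins above
python -c "import torch; print(torch.__version__, torch.version.cuda, torch.cuda.is_available())"
python -c "import mmcv; print(mmcv.__version__)"
python -c "import mmdet; print(mmdet.__version__)"
```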
step 2. Clone and install the BEVDet_Dual repo:
```shell
git clone https://github.com/Wj-costumer/BEVDet_Dual.git
cd BEVDet_Dual
pip install -v -e .
```
step 3. Prepare the nuScenes dataset as introduced in nuscenes_det.md and create the .pkl info files for BEVDet by running:
```shell
python tools/create_data_bevdet.py
```
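If the dataset is arranged correctly, the script writes its info files into the data root. A minimal sanity check, assuming the upstream BEVDet filename convention (this fork may use different names):
```shell
# info files produced by create_data_bevdet.py (filenames assume upstream BEVDet convention)
ls data/nuscenes/bevdetv2-nuscenes_infos_train.pkl \
   data/nuscenes/bevdetv2-nuscenes_infos_val.pkl
```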
step 4. For the occupancy prediction task, download (only) the 'gts' from CVPR2023-3D-Occupancy-Prediction and arrange the folder as:
```shell
└── nuscenes
    ├── v1.0-trainval (existing)
    ├── sweeps (existing)
    ├── samples (existing)
    └── gts (new)
```
step 5. Download the models (v1.0). To test the model, first download the trained weights from https://pan.baidu.com/s/1d7vXrqrM5304fumXX0sLBg?pwd=66is and keep them in workspace/ckpts/.
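The inference command further below loads ckpts/bev_occ.pth, so a layout like the following is assumed (the checkpoint filename is taken from that command):
```shell
workspace/
└── ckpts/
    └── bev_occ.pth   # downloaded weights; name taken from the inference command below
```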
Train model
```shell
# single gpu
python tools/train.py configs/bevdetdualocc/bevdet-occ-r50-4d-stereo.py
# multiple gpus
./tools/dist_train.sh configs/bevdetdualocc/bevdet-occ-r50-4d-stereo.py num_gpu
```
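Since tools/train.py follows the standard MMDetection3D entry point, the usual MMDetection 2.x flags should apply; a sketch (the work_dirs path is illustrative, not from this repo):
```shell
# write logs and checkpoints to a custom directory, then resume from the latest checkpoint
python tools/train.py configs/bevdetdualocc/bevdet-occ-r50-4d-stereo.py \
    --work-dir work_dirs/bevdet-occ-r50 \
    --resume-from work_dirs/bevdet-occ-r50/latest.pth
```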
Test model
```shell
# single gpu perception
python tools/test.py configs/bevdetdualocc/bevdet-occ-r50-4d-stereo.py $checkpoint --eval mAP
# multiple gpu perception
./tools/dist_test.sh configs/bevdetdualocc/bevdet-occ-r50-4d-stereo.py $checkpoint num_gpu --eval mAP
# entire pipeline test (remains to be optimized)
python tools/inference.py configs/bevdetdualocc/bevdet-occ-r50-4d-stereo.py ckpts/bev_occ.pth
```
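tools/test.py should likewise accept the standard MMDetection 2.x test flags; a sketch for dumping raw predictions to disk (the output path is illustrative):
```shell
# save raw predictions to a pickle file for offline analysis or visualization
python tools/test.py configs/bevdetdualocc/bevdet-occ-r50-4d-stereo.py $checkpoint \
    --out results.pkl
```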
Next Steps
- Optimize the model structure
- Finish the TensorRT-accelerated version
- Add FPS test code
- Optimize the visualization code
Owner
- Name: Elaine_Blue
- Login: Elaine-Blue
- Kind: user
- Repositories: 1
- Profile: https://github.com/Elaine-Blue
Citation (CITATION.cff)
cff-version: 1.2.0
message: "If you use this software, please cite it as below."
authors:
  - name: "MMDetection3D Contributors"
title: "OpenMMLab's Next-generation Platform for General 3D Object Detection"
date-released: 2020-07-23
url: "https://github.com/open-mmlab/mmdetection3d"
license: Apache-2.0
Dependencies
- actions/checkout v2 composite
- actions/setup-python v2 composite
- codecov/codecov-action v1.0.10 composite
- codecov/codecov-action v2 composite
- actions/checkout v2 composite
- actions/setup-python v2 composite
- actions/checkout v2 composite
- actions/setup-python v1 composite
- actions/checkout v2 composite
- actions/setup-python v2 composite
- nvcr.io/nvidia/tensorrt 22.07-py3 build
- pytorch/pytorch ${PYTORCH}-cuda${CUDA}-cudnn${CUDNN}-devel build
- docutils ==0.16.0
- m2r *
- mistune ==0.8.4
- myst-parser *
- sphinx ==4.0.2
- sphinx-copybutton *
- sphinx_markdown_tables *
- mmcv-full >=1.4.8,<=1.6.0
- mmdet >=2.24.0,<=3.0.0
- mmsegmentation >=0.20.0,<=1.0.0
- open3d *
- spconv *
- waymo-open-dataset-tf-2-1-0 ==1.2.0
- mmcv >=1.4.8
- mmdet >=2.24.0
- mmsegmentation >=0.20.1
- torch *
- torchvision *
- lyft_dataset_sdk *
- networkx >=2.2,<2.3
- numba ==0.53.0
- numpy *
- nuscenes-devkit *
- plyfile *
- scikit-image *
- tensorboard *
- trimesh >=2.35.39,<2.35.40
- asynctest * test
- codecov * test
- flake8 * test
- interrogate * test
- isort * test
- kwarray * test
- pytest * test
- pytest-cov * test
- pytest-runner * test
- ubelt * test
- xdoctest >=0.10.0 test
- yapf * test
- lap *
- line_profiler *
- motmetrics ==1.1.3
- numba *
- numpy *
- nuscenes-devkit *
- pandas >=0.24
- pyquaternion *
- pyyaml *
- shapely *
- sympy *