macp
[WACV 2024] MACP: Efficient Model Adaptation for Cooperative Perception.
Science Score: 26.0%
This score indicates how likely this project is to be science-related based on various indicators:
- ○ CITATION.cff file
- ✓ codemeta.json file: found codemeta.json file
- ✓ .zenodo.json file: found .zenodo.json file
- ○ DOI references
- ○ Academic publication links
- ○ Academic email domains
- ○ Institutional organization owner
- ○ JOSS paper metadata
- ○ Scientific vocabulary similarity: low similarity (11.4%) to scientific vocabulary
Repository
Basic Info
- Host: GitHub
- Owner: PurdueDigitalTwin
- License: MIT
- Language: Python
- Default Branch: master
- Homepage: https://purduedigitaltwin.github.io/MACP/
- Size: 106 MB
Statistics
- Stars: 16
- Watchers: 0
- Forks: 1
- Open Issues: 2
- Releases: 0
Metadata Files
README.md
MACP: Efficient Model Adaptation for Cooperative Perception
The official repository for the WACV 2024 paper MACP: Efficient Model Adaptation for Cooperative Perception. This work proposes a novel method to adapt a single-agent pretrained model to a V2V cooperative perception setting. It achieves state-of-the-art performance on both the V2V4Real and the OPV2V datasets.
Setup
Our project is based on MMDetection3D v1.1.0. Please refer to the official documentation to set up the environment.
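As a minimal setup sketch (the conda workflow and version pins below are assumptions, not the project's pinned environment; the official documentation is authoritative):

```bash
# Assumed environment recipe; follow the MMDetection3D v1.1.0 docs for
# the exact, supported installation steps.
conda create -n macp python=3.8 -y
conda activate macp
pip install torch torchvision        # choose the build matching your CUDA version
pip install -U openmim
mim install mmengine "mmcv>=2.0.0" "mmdet>=3.0.0"
mim install "mmdet3d==1.1.0"
```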
Data Preparation
Download the V2V4Real and OPV2V datasets.
Once the data is downloaded, organize it in the following structure:

```
$REPO_ROOT
└── data
    ├── v2v4real
    │   ├── train
    │   │   └── testoutput_CAV_data_2022-03-15-09-54-40_0  # data folder
    │   └── test
    └── openv2v
        ├── train
        │   └── 2021_08_16_22_26_54  # data folder
        ├── test
        ├── validate
        └── test_culver_city
```
Then, run the script files scripts/create_v2v4real.sh and scripts/create_openv2v.sh to prepare the cached data.
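For reference, a typical invocation from the repository root looks like this (the scripts take no arguments in this sketch; check the script files for any required options):

```bash
# Build the cached/converted data for both datasets.
cd /path/to/repo
bash scripts/create_v2v4real.sh
bash scripts/create_openv2v.sh
```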
Notes
- The core code of our project is in the `projects/Coperception` folder.
- The voxelization OP in the original implementation of `BEVFusion` is different from the implementation in MMCV. Please refer here to compile the OP on CUDA; a sketch follows below.
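If the linked instructions are unavailable, a rough sketch of the compile step is below, assuming the upstream BEVFusion repository layout (the URL and build command are assumptions about the upstream project, not part of this README):

```bash
# Assumed upstream repository; the instructions linked above are authoritative.
git clone https://github.com/mit-han-lab/bevfusion.git
cd bevfusion
python setup.py develop   # builds the custom CUDA ops, including voxelization
```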
MACP Weights
If you are interested in any other pretrained weights or details, please open an issue or contact us.
| Model | Backbone | Checkpoint | Config | AP@50 | AP@70 | Log |
|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
| MACP-V2V4Real | BEVFusion-LiDAR | Google Drive | Google Drive | 67.6 | 47.9 | Google Drive |
| MACP-OPV2V | BEVFusion-LiDAR | Google Drive | Google Drive | 93.7 | 90.3 | Google Drive |
Training
We train our model on a single NVIDIA RTX 4090 GPU with 24 GB of memory. The training command is as follows:
```bash
cd /path/to/repo
export PYTHONPATH=$PWD:$PYTHONPATH
python tools/train.py path/to/config
```
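For example, a concrete run might look like this (the config path below is hypothetical; use the config file released with the checkpoints in the table above):

```bash
# Hypothetical config path, for illustration only; the project's core
# code lives under projects/Coperception.
python tools/train.py projects/Coperception/configs/macp_v2v4real.py
```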
Evaluation
The evaluation command is as follows:
```bash
cd /path/to/repo
export PYTHONPATH=$PWD:$PYTHONPATH
python tools/test.py path/to/config path/to/checkpoint
```
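Likewise for evaluation (both paths are hypothetical; substitute the released config and checkpoint from the weights table):

```bash
# Hypothetical paths, for illustration only.
python tools/test.py projects/Coperception/configs/macp_v2v4real.py \
    checkpoints/macp_v2v4real.pth
```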
Citation
If you find our work useful in your research, please consider citing:
```bibtex
@inproceedings{ma2024macp,
  title={MACP: Efficient Model Adaptation for Cooperative Perception},
  author={Ma, Yunsheng and Lu, Juanwu and Cui, Can and Zhao, Sicheng and Cao, Xu and Ye, Wenqian and Wang, Ziran},
  booktitle={Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision},
  pages={3373--3382},
  year={2024}
}
```
Acknowledgement
This project is based on code from several open-source projects, and we would like to thank the authors for their great work.
Owner
- Name: Purdue Digital Twin Lab
- Login: PurdueDigitalTwin
- Kind: organization
- Location: West Lafayette, IN
- Repositories: 3
- Profile: https://github.com/PurdueDigitalTwin
Purdue Digital Twin Lab aims to build digital replicas of real-world entities based on AI, big data, cloud/edge computing, and mixed reality.
GitHub Events
Total
- Watch event: 4
Last Year
- Watch event: 4