lidarweather
[ECCV 2024 Oral] Official code of "Rethinking Data Augmentation for Robust LiDAR Semantic Segmentation in Adverse Weather".
Science Score: 54.0%
This score indicates how likely this project is to be science-related based on various indicators:
- ✓ CITATION.cff file: found
- ✓ codemeta.json file: found
- ✓ .zenodo.json file: found
- ○ DOI references
- ✓ Academic publication links: links to arxiv.org
- ○ Academic email domains
- ○ Institutional organization owner
- ○ JOSS paper metadata
- ○ Scientific vocabulary similarity: low similarity (12.7%) to scientific vocabulary
Repository
Basic Info
Statistics
- Stars: 48
- Watchers: 2
- Forks: 2
- Open Issues: 5
- Releases: 0
Metadata Files
README.md
[ECCV 2024 Oral] Rethinking Data Augmentation for Robust LiDAR Semantic Segmentation in Adverse Weather
Junsung Park, Kyungmin Kim, Hyunjung Shim (CVML Lab, KAIST AI)
[Project Page]
About
Official implementation of "Rethinking Data Augmentation for Robust LiDAR Semantic Segmentation in Adverse Weather", accepted to ECCV 2024.
Existing LiDAR semantic segmentation methods often struggle in adverse weather conditions. Previous work has addressed this by simulating adverse weather or using general data augmentation, but lacks detailed analysis of the negative effects on performance. We identified key factors of adverse weather affecting performance: geometric perturbation from refraction and point drop due to energy absorption and occlusions. Based on these findings, we propose new data augmentation techniques: Selective Jittering (SJ) to mimic geometric perturbation and Learnable Point Drop (LPD) to approximate point drop patterns using a Deep Q-Learning Network. These techniques enhance the model by exposing it to identified vulnerabilities without precise weather simulation.
Fig. The overall training process of our methods.
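To make the two augmentations concrete, here is a minimal NumPy sketch. The function names and parameters are illustrative only, not the paper's: Selective Jittering perturbs a random subset of points to mimic refraction noise, and point drop is shown here as a uniform random drop, whereas the paper's Learnable Point Drop learns its drop policy with a Deep Q-Network instead.

```python
import numpy as np

rng = np.random.default_rng(0)

def selective_jitter(points, jitter_ratio=0.1, sigma=0.01):
    """Jitter the xyz of a random subset of points (N, 4) -> (N, 4).

    Mimics the geometric perturbation from refraction described in the
    paper; parameter names and values are illustrative assumptions.
    """
    points = points.copy()
    n = points.shape[0]
    k = int(n * jitter_ratio)
    idx = rng.choice(n, size=k, replace=False)
    points[idx, :3] += rng.normal(0.0, sigma, size=(k, 3))
    return points

def random_point_drop(points, drop_ratio=0.1):
    """Drop a random fraction of points (N, 4) -> (M, 4), M <= N.

    A uniform stand-in for the paper's Learnable Point Drop, which
    selects drop regions with a Deep Q-Network rather than uniformly.
    """
    keep = rng.random(points.shape[0]) >= drop_ratio
    return points[keep]
```

In the actual method, both operations are applied during training so the segmentation model sees the perturbation patterns that adverse weather induces, without requiring a physically accurate weather simulator.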
Updates
- [2024.09] - Our paper was selected for ORAL PRESENTATION at ECCV 2024! Link
- [2024.08] - Our project page is open! Check it out here!
- [2024.08] - The official implementation is released! Our paper is also available on arXiv; click here to check it out.
Contents
Installation
```shell
conda create -n lidarweather python=3.8 -y && conda activate lidarweather
conda install pytorch==1.10.0 torchvision==0.11.0 cudatoolkit=11.3 -c pytorch -y
pip install -U openmim
mim install mmengine
mim install 'mmcv>=2.0.0rc4, <2.1.0'
mim install 'mmdet>=3.0.0, <3.2.0'

git clone https://github.com/engineerJPark/LiDARWeather.git
cd LiDARWeather && pip install -v -e .

pip install cumm-cu113 && pip install spconv-cu113
sudo apt-get install libsparsehash-dev
export PATH=/usr/local/cuda/bin:$PATH
pip install --upgrade git+https://github.com/mit-han-lab/torchsparse.git@v1.4.0
pip install nuscenes-devkit
pip install wandb
```
Please refer to INSTALL.md for the installation details.
Data Preparation
Please refer to DATA_PREPARE.md for the details on preparing the SemanticKITTI, SynLiDAR, SemanticSTF, and SemanticKITTI-C datasets.
Getting Started
Train
```shell
./tools/dist_train.sh configs/lidarweather_minkunet/sj+lpd+minkunet_semantickitti.py 4

./tools/dist_train.sh projects/CENet/lidarweather_cenet/sj+lpd+cenet_semantickitti.py 4
```
Test
```shell
python tools/test.py configs/lidarweather_minkunet/sj+lpd+minkunet_semantickitti.py work_dirs/sj+lpd+minkunet_semantickitti/epoch_15.pth

python tools/test.py projects/CENet/lidarweather_cenet/sj+lpd+cenet_semantickitti.py work_dirs/sj+lpd+cenet_semantickitti/epoch_50.pth
```
Please refer to GET_STARTED.md to learn more details.
Main Results
SemanticKITTI → SemanticSTF
| Methods | D-fog | L-fog | Rain | Snow | mIoU |
|---|---|---|---|---|---|
| Oracle | 51.9 | 54.6 | 57.9 | 53.7 | 54.7 |
| Baseline | 30.7 | 30.1 | 29.7 | 25.3 | 31.4 |
| LaserMix | 23.2 | 15.5 | 9.3 | 7.8 | 14.7 |
| PolarMix | 21.3 | 14.9 | 16.5 | 9.3 | 15.3 |
| PointDR* | 37.3 | 33.5 | 35.5 | 26.9 | 33.9 |
| Baseline+SJ+LPD | 36.0 | 37.5 | 37.6 | 33.1 | 39.5 |
| Increments to baseline | +5.3 | +7.4 | +7.9 | +7.8 | +8.1 |
SynLiDAR → SemanticSTF
| Methods | D-fog | L-fog | Rain | Snow | mIoU |
|---|---|---|---|---|---|
| Oracle | 51.9 | 54.6 | 57.9 | 53.7 | 54.7 |
| Baseline | 15.24 | 15.97 | 16.83 | 12.76 | 15.45 |
| LaserMix | 15.32 | 17.95 | 18.55 | 13.8 | 16.85 |
| PolarMix | 16.47 | 18.69 | 19.63 | 15.98 | 18.09 |
| PointDR* | 19.09 | 20.28 | 25.29 | 18.98 | 19.78 |
| Baseline+SJ+LPD | 19.08 | 20.65 | 21.97 | 17.27 | 20.08 |
| Increments to baseline | +3.8 | +4.7 | +5.1 | +4.5 | +4.6 |
Other Models & Dataset
| Method | SemanticSTF | SemanticKITTI-C |
|---|---|---|
| CENet | 14.2 | 49.3 |
| CENet+Ours | 22.0 (+7.8) | 53.2 (+3.9) |
| SPVCNN | 28.1 | 52.5 |
| SPVCNN+Ours | 38.4 (+10.3) | 52.9 (+0.4) |
| Minkowski | 31.4 | 53.0 |
| Minkowski+Ours | 39.5 (+8.1) | 58.6 (+5.6) |
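All scores in the tables above are mIoU (%). For reference, a minimal sketch of how per-class IoU and mIoU are computed from a confusion matrix (written from scratch as an illustration; this is not the repository's evaluation code):

```python
import numpy as np

def miou(conf):
    """Mean IoU from a square confusion matrix.

    Rows are ground-truth classes, columns are predictions.
    IoU_c = TP_c / (TP_c + FP_c + FN_c); mIoU is the class average.
    """
    tp = np.diag(conf).astype(float)
    fp = conf.sum(axis=0) - tp   # predicted as c but wrong
    fn = conf.sum(axis=1) - tp   # true c but missed
    denom = tp + fp + fn
    iou = np.where(denom > 0, tp / np.maximum(denom, 1), 0.0)
    return float(iou.mean())
```

For example, a two-class confusion matrix `[[3, 1], [1, 3]]` gives IoU 3/5 for each class, so mIoU = 0.6.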
Qualitative Results
Fig. Qualitative results of our methods.
License
This work is under the Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.
Acknowledgement

Our codebase builds heavily on MMDetection3D and PyTorch DQN Tutorials. MMDetection3D is an open-source toolbox based on PyTorch, towards the next-generation platform for general 3D perception. It is a part of the OpenMMLab project developed by MMLab.
Citation
If you find this work helpful, please consider citing our paper:
```bibtex
@article{park2024rethinking,
  title={Rethinking Data Augmentation for Robust LiDAR Semantic Segmentation in Adverse Weather},
  author={Park, Junsung and Kim, Kyungmin and Shim, Hyunjung},
  journal={arXiv preprint arXiv:2407.02286},
  year={2024}
}
```
This citation will be updated after the proceedings are published.
Owner
- Name: EngineerJPark
- Login: engineerJPark
- Kind: user
- Company: Korea University
- Website: https://knowledgeforengineers.tistory.com/
- Repositories: 6
- Profile: https://github.com/engineerJPark
Interests: Deep Learning, Computer Vision, especially Segmentation
Citation (CITATION.cff)
```yaml
cff-version: 1.2.0
message: "If you use this software, please cite it as below."
authors:
  - name: "MMDetection3D Contributors"
title: "OpenMMLab's Next-generation Platform for General 3D Object Detection"
date-released: 2020-07-23
url: "https://github.com/open-mmlab/mmdetection3d"
license: Apache-2.0
```
GitHub Events
Total
- Issues event: 6
- Watch event: 24
- Issue comment event: 9
- Fork event: 1
Last Year
- Issues event: 6
- Watch event: 24
- Issue comment event: 9
- Fork event: 1
Dependencies
- pytorch/pytorch ${PYTORCH}-cuda${CUDA}-cudnn${CUDNN}-devel build
- pytorch/pytorch ${PYTORCH}-cuda${CUDA}-cudnn${CUDNN}-devel build
- docutils ==0.16.0
- markdown >=3.4.0
- myst-parser *
- sphinx ==4.0.2
- sphinx-tabs *
- sphinx_copybutton *
- sphinx_markdown_tables >=0.0.16
- tabulate *
- urllib3 <2.0.0
- mmcv >=2.0.0rc4,<2.1.0
- mmdet >=3.0.0,<3.2.0
- mmengine >=0.7.1,<1.0.0
- black ==20.8b1
- typing-extensions *
- waymo-open-dataset-tf-2-6-0 *
- mmcv >=2.0.0rc4
- mmdet >=3.0.0
- mmengine >=0.7.1
- torch *
- torchvision *
- lyft_dataset_sdk *
- networkx >=2.5
- numba *
- numpy *
- nuscenes-devkit *
- open3d *
- plyfile *
- scikit-image *
- tensorboard *
- trimesh *
- codecov * test
- flake8 * test
- interrogate * test
- isort * test
- kwarray * test
- parameterized * test
- pytest * test
- pytest-cov * test
- pytest-runner * test
- ubelt * test
- xdoctest >=0.10.0 test
- yapf * test