ple

[IROS 2024] Learning from Spatio-temporal Correlation for Semi-Supervised LiDAR Semantic Segmentation

https://github.com/halbielee/ple

Science Score: 54.0%

This score indicates how likely this project is to be science-related based on various indicators:

  • CITATION.cff file
    Found CITATION.cff file
  • codemeta.json file
    Found codemeta.json file
  • .zenodo.json file
    Found .zenodo.json file
  • DOI references
  • Academic publication links
    Links to: arxiv.org
  • Academic email domains
  • Institutional organization owner
  • JOSS paper metadata
  • Scientific vocabulary similarity
    Low similarity (13.3%) to scientific vocabulary

Keywords

autonomous-driving lidar segmentation semi-supervised-learning semi-supervised-lidar-segmentation
Last synced: 6 months ago

Repository

[IROS 2024] Learning from Spatio-temporal Correlation for Semi-Supervised LiDAR Semantic Segmentation

Basic Info
Statistics
  • Stars: 6
  • Watchers: 1
  • Forks: 0
  • Open Issues: 1
  • Releases: 0
Topics
autonomous-driving lidar segmentation semi-supervised-learning semi-supervised-lidar-segmentation
Created over 1 year ago · Last pushed about 1 year ago
Metadata Files
Readme License Citation

README.md

Learning from Spatio-temporal Correlation for Semi-Supervised LiDAR Semantic Segmentation

arXiv Paper

Seungho Lee, Hwijeong Lee, Hyunjung Shim

Yonsei University, Korea Advanced Institute of Science and Technology

Introduction

This novel semi-supervised LiDAR segmentation method leverages spatio-temporal information between adjacent scans to generate high-quality pseudo-labels, achieving state-of-the-art performance on SemanticKITTI and nuScenes with minimal labeled data (as low as 5%). Notably, it outperforms previous SOTA results using only 20% of labeled data, making it highly efficient for real-world applications.
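The core idea of proximity-based pseudo-labeling can be illustrated with a minimal sketch (an illustrative simplification, not the paper's implementation): after adjacent scans are aligned into a common frame using ego poses, each point of an unlabeled scan borrows the label of its nearest labeled neighbour, provided that neighbour is close enough. The function name and the `max_dist` threshold below are assumptions for illustration; a real implementation would use a KD-tree (e.g. `scipy.spatial.cKDTree`) instead of brute-force distances.

```python
import numpy as np

def propagate_labels(labeled_pts, labels, unlabeled_pts, max_dist=0.5):
    """Assign each unlabeled point the label of its nearest labeled
    neighbour within max_dist metres; otherwise the ignore label (-1).
    All point arrays are (N, 3) in a common world frame."""
    # pairwise Euclidean distances: shape (num_unlabeled, num_labeled)
    d = np.linalg.norm(unlabeled_pts[:, None, :] - labeled_pts[None, :, :], axis=-1)
    nearest = d.argmin(axis=1)
    return np.where(d.min(axis=1) <= max_dist, labels[nearest], -1)

# toy example: two labeled points, three points in an adjacent scan
labeled = np.array([[0.0, 0.0, 0.0], [10.0, 0.0, 0.0]])
lab = np.array([1, 2])
unlabeled = np.array([[0.1, 0.0, 0.0], [10.2, 0.0, 0.0], [50.0, 0.0, 0.0]])
print(propagate_labels(labeled, lab, unlabeled))  # [ 1  2 -1]
```

The far-away third point receives the ignore label rather than a noisy guess, which is why distance-gated propagation yields high-precision pseudo-labels.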

Dataset Preparation

To run semi-supervised LiDAR segmentation (SSLS), you'll need to download and preprocess the SemanticKITTI and nuScenes datasets. For detailed instructions on dataset preparation, please refer to our guide here📚.

Installation

Please refer to the installation guide for detailed instructions on setting up the environment. This code is slightly modified from the original MM3D repository so that the dataset path is loaded dynamically. For the original MM3D repository, please refer to here.

Run: Proximity-based Label Estimation (PLE)

Option 1: Run the entire process

This repository includes an implementation of proximity-based label estimation (PLE). You can run the entire process with the following commands. Note that processing all labeled ratios (0.5, 1, 2, 5, 10, 20, 50) takes several hours:

```bash
cd generate_ple
bash semantickitti.sh
```

For a step-by-step implementation, follow these instructions:

1. Set your environment variables:

```bash
DATASET_PATH=~/dataset/SemanticKITTI/dataset
RATIO=0.5  # 0.5, 1, 2, 5, 10, 20, or 50
```

2. Generate PLE-based pseudo labels:

```bash
python semantickitti_02_ple.py \
    --ratio $RATIO \
    --base_path $DATASET_PATH \
    --save_path $DATASET_PATH/PLE_$RATIO
```

3. Evaluate the generated pseudo labels:

```bash
python semantickitti_03_evaluate.py \
    --gt $DATASET_PATH \
    --pred $DATASET_PATH/PLE_$RATIO
```

4. Create a list of pseudo labels:

```bash
python semantickitti_04_make_pseudo_list.py \
    --ratio $RATIO \
    --base_path $DATASET_PATH \
    --save_path $DATASET_PATH \
    --pseudo_file_path $DATASET_PATH/PLE_$RATIO
```
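The evaluation step above reports mIoU. As a rough sketch of how mean intersection-over-union is computed (a simplified stand-in, not the repository's `semantickitti_03_evaluate.py`), IoU is taken per class and averaged over classes that actually occur:

```python
import numpy as np

def miou(gt, pred, num_classes, ignore=-1):
    """Mean IoU over classes with a non-empty union; points whose
    ground-truth label equals `ignore` are skipped entirely."""
    mask = gt != ignore
    gt, pred = gt[mask], pred[mask]
    ious = []
    for c in range(num_classes):
        inter = np.sum((gt == c) & (pred == c))
        union = np.sum((gt == c) | (pred == c))
        if union > 0:
            ious.append(inter / union)
    return float(np.mean(ious))

gt   = np.array([0, 0, 1, 1, 2, 2])
pred = np.array([0, 1, 1, 1, 2, 0])
print(miou(gt, pred, num_classes=3))  # 0.5
```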

For nuScenes dataset, please refer to nuscenes.sh in the same directory.

Option 2: Use pre-generated pseudo labels

If you want to use pre-generated pseudo labels, download them from the following link:

- IROS2024_PLE

After downloading the pseudo labels, place them in the following directories:

- SemanticKITTI:
  - ~/dataset/SemanticKITTI/dataset/PLE_$RATIO
  - ~/dataset/SemanticKITTI/dataset/semantickitti_infos_train.ple.${RATIO}.pkl
  - ~/dataset/SemanticKITTI/dataset/semantickitti_infos_train.ple.${RATIO}-unlabeled.pkl
- nuScenes:
  - ~/dataset/nuScenes/PLE_$RATIO
  - ~/dataset/nuScenes/nuscenes_kitti_infos_train.ple.${RATIO}.pkl
  - ~/dataset/nuScenes/nuscenes_kitti_infos_train.ple.${RATIO}-unlabeled.pkl

Run: Training Dual-branch Network

Execute the following script to train the dual-branch network with PLE-based pseudo labels:

```bash
bash script/lasermix_cy3d_mt_dualbranch_semi_semantickitti_ple.sh
```

You can also train the MeanTeacher model or use the nuScenes dataset. For more details, refer to the script directory.
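The MeanTeacher variant mentioned above keeps a teacher network as an exponential moving average (EMA) of the student. A minimal sketch of that update, with parameters represented as a plain dict of NumPy arrays (an illustrative simplification, not this repository's PyTorch implementation):

```python
import numpy as np

def ema_update(teacher, student, momentum=0.999):
    """Mean Teacher EMA update: teacher <- m * teacher + (1 - m) * student,
    applied parameter-by-parameter. A higher momentum makes the teacher
    a slower, smoother average of student checkpoints."""
    for name, s in student.items():
        teacher[name] = momentum * teacher[name] + (1.0 - momentum) * s

# toy example with a single weight vector
student = {"w": np.array([1.0, 2.0])}
teacher = {"w": np.array([0.0, 0.0])}
ema_update(teacher, student, momentum=0.9)
print(teacher["w"])  # [0.1 0.2]
```

In the actual training loop this update would run after every optimizer step on the student, and the teacher's predictions on unlabeled scans would supervise the student.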

Results

Performance of mIoU on SemanticKITTI Dataset

| Method | 0.5% | 1% | 2% | 5% | 10% | 20% | 50% |
|-------------------|------|------|------|------|------|------|------|
| LaserMix | 47.3 | 55.5 | 59.2 | 61.7 | 62.4 | 62.4 | 62.1 |
| PLE + Dual Branch | 52.2 | 61.1 | 62.9 | 62.8 | 63.1 | 64.1 | 64.3 |

Performance of mIoU on nuScenes Dataset

| Method | 0.5% | 1% | 2% | 5% | 10% | 20% | 50% |
|-------------------|------|------|------|------|------|------|------|
| LaserMix | 51.4 | 58.4 | 63.9 | 69.7 | 71.6 | 73.7 | 73.7 |
| PLE + Dual Branch | 58.0 | 62.9 | 67.2 | 72.8 | 74.3 | 76.0 | 76.1 |

Please see the paper for more details.

Citation

If you find our work useful in your research, please cite:

```bibtex
@article{lee2023learning,
  title={Learning from Spatio-temporal Correlation for Semi-Supervised LiDAR Semantic Segmentation},
  author={Lee, Seungho and Lee, Hwijeong and Shim, Hyunjung},
  journal={arXiv preprint arXiv:2308.12345},
  year={2023}
}
```

Acknowledgements

This code is heavily based on LaserMix.

Owner

  • Name: Seungho, Lee
  • Login: halbielee
  • Kind: user

seungholee@yonsei.ac.kr

Citation (CITATION.cff)

cff-version: 1.2.0
message: "If you use this software, please cite it as below."
authors:
  - name: "MMDetection3D Contributors"
title: "OpenMMLab's Next-generation Platform for General 3D Object Detection"
date-released: 2020-07-23
url: "https://github.com/open-mmlab/mmdetection3d"
license: Apache-2.0

GitHub Events

Total
  • Watch event: 6
  • Issue comment event: 2
  • Push event: 1
  • Public event: 1
Last Year
  • Watch event: 6
  • Issue comment event: 2
  • Push event: 1
  • Public event: 1

Dependencies

projects/BEVFusion/setup.py pypi
projects/DSVT/setup.py pypi
requirements/build.txt pypi
requirements/docs.txt pypi
  • docutils ==0.16.0
  • markdown >=3.4.0
  • myst-parser *
  • sphinx ==4.0.2
  • sphinx-tabs *
  • sphinx_copybutton *
  • sphinx_markdown_tables >=0.0.16
  • tabulate *
  • urllib3 <2.0.0
requirements/mminstall.txt pypi
  • mmcv >=2.0.0rc4,<2.1.0
  • mmdet >=3.0.0,<3.2.0
  • mmengine >=0.7.1,<1.0.0
requirements/optional.txt pypi
  • black ==20.8b1
  • typing-extensions *
  • waymo-open-dataset-tf-2-6-0 *
requirements/readthedocs.txt pypi
  • mmcv >=2.0.0rc4
  • mmdet >=3.0.0
  • mmengine >=0.7.1
  • torch *
  • torchvision *
requirements/runtime.txt pypi
  • lyft_dataset_sdk *
  • networkx >=2.5
  • numba *
  • numpy *
  • nuscenes-devkit *
  • open3d *
  • plyfile *
  • scikit-image *
  • tensorboard *
  • trimesh *
requirements/tests.txt pypi
  • codecov * test
  • flake8 * test
  • interrogate * test
  • isort * test
  • kwarray * test
  • parameterized * test
  • pytest * test
  • pytest-cov * test
  • pytest-runner * test
  • ubelt * test
  • xdoctest >=0.10.0 test
  • yapf * test
requirements.txt pypi
setup.py pypi