lasermix

[CVPR 2023 Highlight] LaserMix for Semi-Supervised LiDAR Semantic Segmentation

https://github.com/ldkong1205/lasermix

Science Score: 54.0%

This score indicates how likely this project is to be science-related based on various indicators:

  • CITATION.cff file
    Found CITATION.cff file
  • codemeta.json file
    Found codemeta.json file
  • .zenodo.json file
    Found .zenodo.json file
  • DOI references
  • Academic publication links
    Links to: arxiv.org, scholar.google
  • Academic email domains
  • Institutional organization owner
  • JOSS paper metadata
  • Scientific vocabulary similarity
    Low similarity (6.1%) to scientific vocabulary

Keywords

autonomous-driving lidar segmentation semi-supervised-learning
Last synced: 6 months ago

Repository

[CVPR 2023 Highlight] LaserMix for Semi-Supervised LiDAR Semantic Segmentation

Basic Info
  • Host: GitHub
  • Owner: ldkong1205
  • License: apache-2.0
  • Language: Python
  • Default Branch: main
  • Homepage: https://ldkong.com/LaserMix
  • Size: 8.61 MB
Statistics
  • Stars: 289
  • Watchers: 13
  • Forks: 18
  • Open Issues: 6
  • Releases: 0
Topics
autonomous-driving lidar segmentation semi-supervised-learning
Created over 3 years ago · Last pushed almost 2 years ago
Metadata Files
Readme License Citation

README.md


LaserMix for Semi-Supervised LiDAR Semantic Segmentation

Lingdong Kong · Jiawei Ren · Liang Pan · Ziwei Liu
S-Lab, Nanyang Technological University

About

LaserMix is a semi-supervised learning (SSL) framework designed for LiDAR semantic segmentation. It leverages the strong spatial prior of driving scenes to construct low-variation areas via laser beam mixing, and encourages segmentation models to make confident and consistent predictions before and after mixing.
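The mixing step described above can be sketched in a few lines of NumPy. This is an illustrative approximation, not the repository's implementation: the inclination range, the fixed number of areas, and the even/odd swapping pattern are simplifying assumptions, and a real pipeline would carry labels through the same index masks.

```python
import numpy as np

def inclination(points):
    """Inclination angle phi = arctan(z / sqrt(x^2 + y^2)) per LiDAR point (x, y, z)."""
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    return np.arctan2(z, np.sqrt(x**2 + y**2))

def laser_mix(points_a, points_b, num_areas=4, phi_range=(-0.44, 0.18)):
    """Partition two scans into inclination areas and swap alternating areas."""
    edges = np.linspace(phi_range[0], phi_range[1], num_areas + 1)
    # Area index for every point; points outside the range fall into the end bins.
    area_a = np.clip(np.digitize(inclination(points_a), edges) - 1, 0, num_areas - 1)
    area_b = np.clip(np.digitize(inclination(points_b), edges) - 1, 0, num_areas - 1)
    # One mixed scan takes even areas from A and odd areas from B; the other swaps.
    mixed_ab = np.concatenate([points_a[area_a % 2 == 0], points_b[area_b % 2 == 1]])
    mixed_ba = np.concatenate([points_b[area_b % 2 == 0], points_a[area_a % 2 == 1]])
    return mixed_ab, mixed_ba
```

Every point lands in exactly one of the two mixed scans, so the pair together preserves all points of both inputs; the SSL objective then asks the model to predict consistently on the mixed and unmixed views.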



Fig. Illustration for laser beam partition based on inclination φ.


Visit our project page to explore more details. :red_car:

Updates

  • [2024.05] - Our improved framework, LaserMix++ :rocket:, is available on arXiv.
  • [2024.01] - The toolkit tailored for The RoboDrive Challenge has been released. :hammer_and_wrench:
  • [2023.12] - We are hosting The RoboDrive Challenge at ICRA 2024. :blue_car:
  • [2023.12] - Introducing FRNet, an efficient and effective real-time LiDAR segmentation model that achieves promising semi-supervised learning results on SemanticKITTI and nuScenes. Code and checkpoints are available for download.
  • [2023.03] - Intend to test the robustness of your LiDAR semantic segmentation models? Check our recent work, :robot: Robo3D, a comprehensive suite that enables OoD robustness evaluation of 3D segmentors on our newly established datasets: SemanticKITTI-C, nuScenes-C, and WOD-C.
  • [2023.03] - LaserMix was selected as a :sparkles: highlight :sparkles: at CVPR 2023 (top 10% of accepted papers).
  • [2023.02] - LaserMix was accepted to CVPR 2023! :tada:
  • [2023.02] - LaserMix has been integrated into the MMDetection3D codebase! Check this PR in the dev-1.x branch to know more details. :beers:
  • [2023.01] - As suggested, we will establish a sequential track that reflects how LiDAR data are collected in our semi-supervised LiDAR semantic segmentation benchmark. The results will be gradually updated in RESULT.md.
  • [2022.12] - We support a wider range of LiDAR segmentation backbones, including RangeNet++, SalsaNext, FIDNet, CENet, MinkowskiUNet, Cylinder3D, and SPVCNN, under both fully- and semi-supervised settings. The checkpoints will be available soon!
  • [2022.12] - The derivation of spatial-prior-based SSL is available here. Take a look! :memo:
  • [2022.08] - LaserMix achieves 1st place among the semi-supervised semantic segmentation leaderboards of nuScenes, SemanticKITTI, and ScribbleKITTI, based on Paper-with-Code. :bar_chart:
  • [2022.08] - We provide a video demo for visual comparisons on the SemanticKITTI val set. Take a look!
  • [2022.07] - Our paper is available on arXiv, click here to check it out. Code will be available soon!
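The consistency objective referenced above can be roughly illustrated as follows. This is a sketch under simplifying assumptions, not the paper's derivation: it simply scores the student's per-point log-probabilities on a mixed scan against pseudo-labels inherited from the teacher's predictions on the unmixed scans, and the function name is ours.

```python
import numpy as np

def mixing_consistency_loss(student_log_probs, mixed_pseudo_labels):
    """Mean cross-entropy between the student's per-point log-probabilities on a
    mixed scan (shape [N, C]) and the pseudo-labels (shape [N], int class ids)
    carried over from the teacher's predictions on the original scans."""
    n = len(mixed_pseudo_labels)
    return -student_log_probs[np.arange(n), mixed_pseudo_labels].mean()
```

In a mean-teacher-style setup, this term would be added to the supervised loss on the labeled scans, encouraging predictions that stay confident and consistent before and after mixing.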

Outline

Installation

Please refer to INSTALL.md for the installation details.

Data Preparation

Please refer to DATA_PREPARE.md for details on preparing the nuScenes, SemanticKITTI, and ScribbleKITTI datasets.

Getting Started

Please refer to GET_STARTED.md to learn more about using this codebase.

Video Demo

| Demo 1 | Demo 2 | Demo 3 |
| :-: | :-: | :-: |
| Link :arrow_heading_up: | Link :arrow_heading_up: | Link :arrow_heading_up: |

Main Result

Framework Overview

Range View

| Method | nuScenes (1% / 10% / 20% / 50%) | SemanticKITTI (1% / 10% / 20% / 50%) | ScribbleKITTI (1% / 10% / 20% / 50%) |
| :-- | :-: | :-: | :-: |
| Sup.-only | 38.3 / 57.5 / 62.7 / 67.6 | 36.2 / 52.2 / 55.9 / 57.2 | 33.1 / 47.7 / 49.9 / 52.5 |
| LaserMix | 49.5 / 68.2 / 70.6 / 73.0 | 43.4 / 58.8 / 59.4 / 61.4 | 38.3 / 54.4 / 55.6 / 58.7 |
| improv. | +11.2 / +10.7 / +7.9 / +5.4 | +7.2 / +6.6 / +3.5 / +4.2 | +5.2 / +6.7 / +5.7 / +6.2 |
| LaserMix++ | | | |
| improv. | | | |

Voxel

| Method | nuScenes (1% / 10% / 20% / 50%) | SemanticKITTI (1% / 10% / 20% / 50%) | ScribbleKITTI (1% / 10% / 20% / 50%) |
| :-- | :-: | :-: | :-: |
| Sup.-only | 50.9 / 65.9 / 66.6 / 71.2 | 45.4 / 56.1 / 57.8 / 58.7 | 39.2 / 48.0 / 52.1 / 53.8 |
| LaserMix | 55.3 / 69.9 / 71.8 / 73.2 | 50.6 / 60.0 / 61.9 / 62.3 | 44.2 / 53.7 / 55.1 / 56.8 |
| improv. | +4.4 / +4.0 / +5.2 / +2.0 | +5.2 / +3.9 / +4.1 / +3.6 | +5.0 / +5.7 / +3.0 / +3.0 |
| LaserMix++ | | | |
| improv. | | | |

Ablation Studies

Qualitative Examples


Checkpoints & More Results

For more experimental results and pretrained weights, please refer to RESULT.md.

TODO List

  • [x] Initial release. :rocket:
  • [x] Add license. See here for more details.
  • [x] Add video demos :movie_camera:
  • [x] Add installation details.
  • [x] Add data preparation details.
  • [ ] Add evaluation details.
  • [ ] Add training details.

Citation

If you find this work helpful, please consider citing our paper:

@inproceedings{kong2023lasermix,
  title     = {LaserMix for Semi-Supervised LiDAR Semantic Segmentation},
  author    = {Kong, Lingdong and Ren, Jiawei and Pan, Liang and Liu, Ziwei},
  booktitle = {IEEE/CVF Conference on Computer Vision and Pattern Recognition},
  pages     = {21705--21715},
  year      = {2023},
}

License

This work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.

Acknowledgement

This work is developed based on the MMDetection3D codebase.


MMDetection3D is an open-source toolbox based on PyTorch, towards the next-generation platform for general 3D perception. It is a part of the OpenMMLab project developed by MMLab.

We acknowledge the use of the following public resources during the course of this work: nuScenes, nuScenes-devkit, SemanticKITTI, SemanticKITTI-API, ScribbleKITTI, FIDNet, CENet, SPVNAS, Cylinder3D, TorchSemiSeg, MixUp, CutMix, CutMix-Seg, CBST, MeanTeacher, and Cityscapes.

We would like to thank Fangzhou Hong for the insightful discussions and feedback. ❤️

Owner

  • Name: Lingdong Kong
  • Login: ldkong1205
  • Kind: user
  • Location: Singapore
  • Company: National University of Singapore

Ph.D. Student @ NUS Computing

Citation (CITATION.cff)

cff-version: 1.2.0
message: "If you use this software, please cite it as below."
authors:
  - name: "MMDetection3D Contributors"
title: "OpenMMLab's Next-generation Platform for General 3D Object Detection"
date-released: 2020-07-23
url: "https://github.com/open-mmlab/mmdetection3d"
license: Apache-2.0

GitHub Events

Total
  • Issues event: 9
  • Watch event: 23
  • Issue comment event: 15
Last Year
  • Issues event: 9
  • Watch event: 23
  • Issue comment event: 15

Dependencies

docker/Dockerfile docker
  • pytorch/pytorch ${PYTORCH}-cuda${CUDA}-cudnn${CUDNN}-devel build
docker/serve/Dockerfile docker
  • pytorch/pytorch ${PYTORCH}-cuda${CUDA}-cudnn${CUDNN}-devel build
projects/BEVFusion/setup.py pypi
projects/DSVT/setup.py pypi
requirements/build.txt pypi
requirements/docs.txt pypi
  • docutils ==0.16.0
  • markdown >=3.4.0
  • myst-parser *
  • sphinx ==4.0.2
  • sphinx-tabs *
  • sphinx_copybutton *
  • sphinx_markdown_tables >=0.0.16
  • tabulate *
  • urllib3 <2.0.0
requirements/mminstall.txt pypi
  • mmcv >=2.0.0rc4,<2.1.0
  • mmdet >=3.0.0,<3.2.0
  • mmengine >=0.7.1,<1.0.0
requirements/optional.txt pypi
  • black ==20.8b1
  • typing-extensions *
  • waymo-open-dataset-tf-2-6-0 *
requirements/readthedocs.txt pypi
  • mmcv >=2.0.0rc4
  • mmdet >=3.0.0
  • mmengine >=0.7.1
  • torch *
  • torchvision *
requirements/runtime.txt pypi
  • lyft_dataset_sdk *
  • networkx >=2.5
  • numba *
  • numpy *
  • nuscenes-devkit *
  • open3d *
  • plyfile *
  • scikit-image *
  • tensorboard *
  • trimesh *
requirements/tests.txt pypi
  • codecov * test
  • flake8 * test
  • interrogate * test
  • isort * test
  • kwarray * test
  • parameterized * test
  • pytest * test
  • pytest-cov * test
  • pytest-runner * test
  • ubelt * test
  • xdoctest >=0.10.0 test
  • yapf * test
requirements.txt pypi
setup.py pypi