lim3d

πŸ”₯(CVPR 2023) Less is More: Reducing Task and Model Complexity for 3D Point Cloud Semantic Segmentation

https://github.com/l1997i/lim3d

Science Score: 54.0%

This score indicates how likely this project is to be science-related based on various indicators:

  • βœ“ CITATION.cff file (found)
  • βœ“ codemeta.json file (found)
  • βœ“ .zenodo.json file (found)
  • β—‹ DOI references
  • βœ“ Academic publication links (links to: arxiv.org)
  • β—‹ Academic email domains
  • β—‹ Institutional organization owner
  • β—‹ JOSS paper metadata
  • β—‹ Scientific vocabulary similarity (low similarity, 9.9%, to scientific vocabulary)

Keywords

complexity computer-vision cvpr cvpr2023 deep-learning lidar point-cloud semantic-segmentation
Last synced: 6 months ago

Repository

πŸ”₯(CVPR 2023) Less is More: Reducing Task and Model Complexity for 3D Point Cloud Semantic Segmentation

Basic Info
Statistics
  • Stars: 92
  • Watchers: 3
  • Forks: 6
  • Open Issues: 2
  • Releases: 0
Topics
complexity computer-vision cvpr cvpr2023 deep-learning lidar point-cloud semantic-segmentation
Created almost 3 years ago · Last pushed over 1 year ago
Metadata Files
Readme License Citation

README.md


πŸ”₯ Less is More: Reducing Task and Model Complexity for 3D Point Cloud Semantic Segmentation [CVPR 2023]

Li Li, Hubert P. H. Shum and Toby P. Breckon, In Proc. International Conference on Computer Vision and Pattern Recognition (CVPR), IEEE, 2023 [homepage] [pdf] [video] [poster]

https://github.com/l1997i/lim3d/assets/35445094/d52f9d80-c4dc-4147-af0c-6101ca6f6b0f

Abstract: Whilst the availability of 3D LiDAR point cloud data has significantly grown in recent years, annotation remains expensive and time-consuming, leading to a demand for semi-supervised semantic segmentation methods with application domains such as autonomous driving. Existing work very often employs relatively large segmentation backbone networks to improve segmentation accuracy, at the expense of computational costs. In addition, many use uniform sampling to reduce the ground-truth data requirements needed for learning, often resulting in sub-optimal performance. To address these issues, we propose a new pipeline that employs a smaller architecture, requiring fewer ground-truth annotations to achieve superior segmentation accuracy compared to contemporary approaches. This is facilitated via a novel Sparse Depthwise Separable Convolution module that significantly reduces the network parameter count while retaining overall task performance. To effectively sub-sample our training data, we propose a new Spatio-Temporal Redundant Frame Downsampling (ST-RFD) method that leverages knowledge of sensor motion within the environment to extract a more diverse subset of training data frame samples. To leverage the use of limited annotated data samples, we further propose a soft pseudo-label method informed by LiDAR reflectivity. Our method outperforms contemporary semi-supervised work in terms of mIoU, using less labeled data, on the SemanticKITTI (59.5@5%) and ScribbleKITTI (58.1@5%) benchmark datasets, based on a 2.3Γ— reduction in model parameters and 641Γ— fewer multiply-add operations whilst also demonstrating significant performance improvement on limited training data (i.e., Less is More).

News

- [2024/01/30] We release the ST-RFD training split on the SemanticKITTI dataset.
- [2023/06/21] πŸ‡¨πŸ‡¦ We will present our work in West Building Exhibit Halls ABC 108 @ Wed 21 Jun, 10:30 a.m. to noon PDT. See you in Vancouver, Canada.
- [2023/06/20] Code released.
- [2023/02/27] LiM3D was accepted at CVPR 2023!

Data Preparation

The data is organized in the format of {SemanticKITTI} βˆͺ {ScribbleKITTI}:

```
sequences/
β”œβ”€β”€ 00/
β”‚   β”œβ”€β”€ scribbles/
β”‚   β”‚   β”œβ”€β”€ 000000.label
β”‚   β”‚   β”œβ”€β”€ 000001.label
β”‚   β”‚   └── .......label
β”‚   β”œβ”€β”€ labels/
β”‚   β”œβ”€β”€ velodyne/
β”‚   β”œβ”€β”€ image_2/
β”‚   β”œβ”€β”€ image_3/
β”‚   β”œβ”€β”€ times.txt
β”‚   β”œβ”€β”€ calib.txt
β”‚   └── poses.txt
β”œβ”€β”€ 01/
β”œβ”€β”€ 02/
.
.
└── 21/
```

SemanticKITTI

Please follow the instructions from SemanticKITTI to download the dataset including the KITTI Odometry point cloud data.

ScribbleKITTI

Please download ScribbleKITTI scribble annotations and unzip in the same directory. Each sequence in the train-set (00-07, 09-10) should contain the velodyne, labels and scribbles directories.

Move the sequences folder into, or create a symbolic link to it from, a new directory inside the project directory called data/. Alternatively, edit the `dataset: root_dir` field of each config file to point to the sequences folder.
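Once the data is in place, a quick sanity check can confirm that every train-set sequence contains the required sub-directories. This small helper is not part of the repository; it is a sketch assuming the default `data/sequences` location described above:

```python
import os

# Train-set sequences (00-07, 09-10), as listed in the ScribbleKITTI notes above.
TRAIN_SEQUENCES = [f"{i:02d}" for i in list(range(8)) + [9, 10]]
REQUIRED_DIRS = ["velodyne", "labels", "scribbles"]

def check_layout(root_dir):
    """Return (sequence, missing_dir) pairs for any incomplete sequence."""
    missing = []
    for seq in TRAIN_SEQUENCES:
        for sub in REQUIRED_DIRS:
            if not os.path.isdir(os.path.join(root_dir, seq, sub)):
                missing.append((seq, sub))
    return missing

if __name__ == "__main__":
    for seq, sub in check_layout("data/sequences"):
        print(f"sequence {seq}: missing {sub}/")
```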

Environment Setup

For the installation, we recommend setting up a virtual environment using conda or venv:

For conda:

```shell
conda env create -f environment.yaml
conda activate lim3d
pip install -r requirements.txt
```

For venv:

```shell
python -m venv ~/venv/lim3d
source ~/venv/lim3d/bin/activate
pip install -r requirements.txt
```

Furthermore, install the following dependencies:

- pytorch (tested with version 1.10.1+cu111)
- pytorch-lightning (tested with version 1.6.5)
- torch-scatter (tested with version 2.0.9)
- spconv (tested with version 2.1.21)

Experiments

Our overall architecture involves three stages (Figure 2). You can reproduce our results through the scripts provided in the experiments folder:

  1. Training: we utilize reflectivity-prior descriptors and adapt the Mean Teacher framework to generate high-quality pseudo-labels. Run with `bash experiments/train.sh`;
  2. Pseudo-labeling: we fix the trained teacher model's predictions in a class-range-balanced manner, expanding the dataset with Reflectivity-based Test Time Augmentation (Reflec-TTA) at test time. Run `bash experiments/crb.sh`, then save the pseudo-labels with `bash experiments/save.sh`;
  3. Distillation with unreliable predictions: we train on the generated pseudo-labels, and utilize unreliable pseudo-labels in a category-wise memory bank for improved discrimination. Run with `bash experiments/dist-reflec.sh`.
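In stage 1, the Mean Teacher framework maintains a teacher network whose weights are an exponential moving average (EMA) of the student's weights. As a minimal illustration (not the repository's implementation; the decay value here is assumed), the update rule looks like:

```python
# Mean Teacher-style EMA update: teacher <- decay * teacher + (1 - decay) * student.
# Plain floats stand in for tensors; the 0.99 default decay is illustrative only.
def ema_update(teacher_params, student_params, decay=0.99):
    """Update teacher parameters in place as an EMA of student parameters."""
    for name, s in student_params.items():
        teacher_params[name] = decay * teacher_params[name] + (1.0 - decay) * s
    return teacher_params

# Usage: after each optimizer step on the student, refresh the teacher.
teacher = {"w": 1.0}
student = {"w": 0.0}
ema_update(teacher, student, decay=0.9)  # teacher["w"] is now 0.9
```

Because the teacher changes slowly, its predictions are more stable than the student's, which is what makes them usable as pseudo-label targets.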

Results

Please refer to our supplementary video and supplementary documentation for more qualitative results.

You can download our pretrained models here via Onedrive.

To validate the results, please refer to the scripts in the `experiments` folder, and put the pretrained models in the `models` folder. Specify `CKPT_PATH` and `SAVE_DIR` in the `predict.sh` file.

For example, to validate the results of 10% labeled training frames + LiM3D (without SDSC) + with reflectivity features on ScribbleKITTI, specify `CKPT_PATH` as `model/sck_crb10_feat69_61.01.ckpt` and run:

```bash
bash experiments/predict.sh
```

NOTE: Sparse Depthwise Separable Convolution (SDSC)

We provide two variants of LiM3D. In `network/modules/cylinder3d.py`:

1. Normal SparseConv3d: un-comment Line 5 (`from network.modules.sparse_convolution import *`)
2. SDSC: un-comment Line 8 (`from network.modules.sds_convolution import *`)
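SDSC applies the depthwise separable idea to sparse 3D convolution: a dense kΓ—kΓ—k convolution over C_in input and C_out output channels is factored into a per-channel (depthwise) kΓ—kΓ—k convolution plus a 1Γ—1Γ—1 pointwise convolution. A back-of-the-envelope parameter count (ignoring biases and sparsity bookkeeping; the paper's reported 2.3Γ— model-size reduction also depends on the surrounding architecture) shows where the savings come from:

```python
def conv3d_params(c_in, c_out, k):
    """Parameter count of a dense 3D convolution (no bias)."""
    return c_in * c_out * k ** 3

def depthwise_separable_params(c_in, c_out, k):
    """Depthwise k^3 conv (one filter per input channel) + 1x1x1 pointwise conv."""
    return c_in * k ** 3 + c_in * c_out

# Example: a 64 -> 64 channel layer with a 3x3x3 kernel.
dense = conv3d_params(64, 64, 3)                    # 64 * 64 * 27 = 110592
separable = depthwise_separable_params(64, 64, 3)   # 1728 + 4096  = 5824
print(f"per-layer parameter reduction: {dense / separable:.1f}x")
```

The per-layer ratio is far larger than the whole-model 2.3Γ— figure because only some layers are replaced and other components (e.g. the segmentation head) are unchanged.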

Our SDSC module runs much more efficiently on an IPU (Intelligence Processing Unit) with PopTorch than on a standard GPU.

The SDSC module uses sparse group convolution (official SpConv), which is limited by memory bandwidth. Modern hardware relies on vector instructions for efficient dot-product computation; when these instructions are not fully utilized, FLOPs are wasted, and when data is not immediately available to the compute engine, extra cycles are spent on data transfer. Memory bandwidth is therefore likely the main constraint on the efficiency of our sparse depthwise and small-group convolutions.

Based on the above issues, we recommend using IPU (Intelligence Processing Unit) for SDSC training. IPUs are specifically designed to handle sparse data efficiently, with architecture that maximizes the utilization of vector instructions and reduces FLOP wastage. Their high in-processor memory bandwidth and low-latency memory access ensure that data is readily available to the compute engine, minimizing additional cycles for data transfer. This makes IPUs highly suitable for sparse depthwise and small-group convolutions, enhancing overall training efficiency.

As a future research direction, we are attempting to optimize the SDSC module for GPU-oriented architectures (see https://github.com/MegEngine/RepLKNet and https://github.com/dvlab-research/spconv-plus), aiming for a better balance between accuracy, FLOPs, parameter count, and actual training time on standard GPUs.

Citation

If you are making use of this work in any way, please reference the following paper in any report, publication, presentation, software release, or other associated materials:

Less is More: Reducing Task and Model Complexity for 3D Point Cloud Semantic Segmentation (Li Li, Hubert P. H. Shum and Toby P. Breckon), In IEEE Conf. Comput. Vis. Pattern Recog. (CVPR), 2023. [homepage] [pdf] [video] [poster]

```bibtex
@InProceedings{li23lim3d,
  title     = {Less Is {{More}}: {{Reducing Task}} and {{Model Complexity}} for {{3D Point Cloud Semantic Segmentation}}},
  author    = {Li, Li and Shum, Hubert P. H. and Breckon, Toby P.},
  keywords  = {point cloud, semantic segmentation, sparse convolution, depthwise separable convolution, autonomous driving},
  year      = {2023},
  month     = {June},
  publisher = {{IEEE}},
  booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
}
```


Acknowledgements

We would like to additionally thank the authors of the open-source codebases ScribbleKITTI, Cylinder3D, and U2PL.

Owner

  • Name: Li (Luis) Li
  • Login: l1997i
  • Kind: user
  • Location: London
  • Company: King's College London

πŸ‘¨πŸ»β€πŸ’» Postdoc @ King's College London β€’ PhD @ Durham University

Citation (CITATION.cff)

# This CITATION.cff file was generated with cffinit.
# Visit https://bit.ly/cffinit to generate yours today!

cff-version: 1.2.0
title: >-
  Less is More: Reducing Task and Model Complexity for 3D
  Point Cloud Semantic Segmentation
message: >-
  If you use this software, please cite it using the
  metadata from this file.
type: software
authors:
  - given-names: Li
    family-names: Li
    email: li.li4@durham.ac.uk
    affiliation: Durham University
    orcid: 'https://orcid.org/0000-0002-9392-7862'
  - given-names: Toby P.
    family-names: Breckon
    email: toby.breckon@durham.ac.uk
    affiliation: Durham University
    orcid: 'https://orcid.org/0000-0003-1666-7590'
  - given-names: Hubert P. H.
    family-names: Shum
    email: hubert.shum@durham.ac.uk
    affiliation: Durham University
    orcid: 'https://orcid.org/0000-0001-5651-6039'
identifiers:
  - type: doi
    value: 10.1109/CVPR52729.2023.00903
repository-code: 'https://github.com/l1997i/lim3d/'
url: 'https://project.luisli.org/lim3d/'
license: Apache-2.0
preferred-citation:
  type: conference-paper
  authors:
  - family-names: "Li"
    given-names: "Li"
    orcid: "https://orcid.org/0000-0002-9392-7862"
  - family-names: "Shum"
    given-names: "Hubert P. H."
    orcid: "https://orcid.org/0000-0001-5651-6039"
  - family-names: "Breckon"
    given-names: "Toby P."
    orcid: "https://orcid.org/0000-0003-1666-7590"
  doi: "10.1109/CVPR52729.2023.00903"
  collection-title: "IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)"
  start: 9361
  end: 9371
  title: "Less is More: Reducing Task and Model Complexity for 3D Point Cloud Semantic Segmentation"
  year: 2023

GitHub Events

Total
  • Issues event: 3
  • Watch event: 7
  • Issue comment event: 2
Last Year
  • Issues event: 3
  • Watch event: 7
  • Issue comment event: 2

Issues and Pull Requests

Last synced: 6 months ago

All Time
  • Total issues: 2
  • Total pull requests: 0
  • Average time to close issues: N/A
  • Average time to close pull requests: N/A
  • Total issue authors: 2
  • Total pull request authors: 0
  • Average comments per issue: 0.0
  • Average comments per pull request: 0
  • Merged pull requests: 0
  • Bot issues: 0
  • Bot pull requests: 0
Past Year
  • Issues: 2
  • Pull requests: 0
  • Average time to close issues: N/A
  • Average time to close pull requests: N/A
  • Issue authors: 2
  • Pull request authors: 0
  • Average comments per issue: 0.0
  • Average comments per pull request: 0
  • Merged pull requests: 0
  • Bot issues: 0
  • Bot pull requests: 0
Top Authors
Issue Authors
  • cuge1995 (2)
  • Heathogata (1)
  • Odelllll (1)
  • Mehrdad-Hosseini1992 (1)