superpoint_transformer

Official PyTorch implementation of Superpoint Transformer introduced in [ICCV'23] "Efficient 3D Semantic Segmentation with Superpoint Transformer" and SuperCluster introduced in [3DV'24 Oral] "Scalable 3D Panoptic Segmentation As Superpoint Graph Clustering"

https://github.com/drprojects/superpoint_transformer

Science Score: 67.0%

This score indicates how likely this project is to be science-related based on various indicators:

  • CITATION.cff file
    Found CITATION.cff file
  • codemeta.json file
    Found codemeta.json file
  • .zenodo.json file
    Found .zenodo.json file
  • DOI references
    Found 12 DOI reference(s) in README
  • Academic publication links
    Links to: arxiv.org, nature.com, zenodo.org
  • Committers with academic emails
  • Institutional organization owner
  • JOSS paper metadata
  • Scientific vocabulary similarity
    Low similarity (8.0%) to scientific vocabulary

Keywords

3d 3dv2024 deep-learning efficient fast graph-clustering hierarchical iccv2023 lightweight panoptic-segmentation partition partitioning point-cloud pytorch semantic-segmentation superpoint transformer
Last synced: 6 months ago

Repository

Official PyTorch implementation of Superpoint Transformer introduced in [ICCV'23] "Efficient 3D Semantic Segmentation with Superpoint Transformer" and SuperCluster introduced in [3DV'24 Oral] "Scalable 3D Panoptic Segmentation As Superpoint Graph Clustering"

Basic Info
  • Host: GitHub
  • Owner: drprojects
  • License: mit
  • Language: Python
  • Default Branch: master
  • Homepage:
  • Size: 26 MB
Statistics
  • Stars: 806
  • Watchers: 13
  • Forks: 106
  • Open Issues: 1
  • Releases: 0
Topics
3d 3dv2024 deep-learning efficient fast graph-clustering hierarchical iccv2023 lightweight panoptic-segmentation partition partitioning point-cloud pytorch semantic-segmentation superpoint transformer
Created over 2 years ago · Last pushed 8 months ago
Metadata Files
Readme Changelog License Citation

README.md


Superpoint Transformer


Official implementation for

Efficient 3D Semantic Segmentation with Superpoint Transformer (ICCV 2023)
arXiv DOI Project page Tutorial

Scalable 3D Panoptic Segmentation As Superpoint Graph Clustering (3DV 2024 Oral)
arXiv DOI Project page

If you ❤️ or simply use this project, don't forget to give the repository a ⭐; it means a lot to us!

```
@article{robert2023spt,
  title={Efficient 3D Semantic Segmentation with Superpoint Transformer},
  author={Robert, Damien and Raguet, Hugo and Landrieu, Loic},
  journal={Proceedings of the IEEE/CVF International Conference on Computer Vision},
  year={2023}
}

@article{robert2024scalable,
  title={Scalable 3D Panoptic Segmentation as Superpoint Graph Clustering},
  author={Robert, Damien and Raguet, Hugo and Landrieu, Loic},
  journal={Proceedings of the IEEE International Conference on 3D Vision},
  year={2024}
}
```


📌 Description

Superpoint Transformer

Superpoint Transformer (SPT) is a superpoint-based transformer 🤖 architecture that efficiently ⚡ performs semantic segmentation on large-scale 3D scenes. This method includes a fast algorithm that partitions 🧩 point clouds into a hierarchical superpoint structure, as well as a self-attention mechanism to exploit the relationships between superpoints at multiple scales.

| ✨ SPT in numbers ✨ |
|:---:|
| 📊 **S3DIS 6-Fold** (76.0 mIoU) |
| 📊 **KITTI-360 Val** (63.5 mIoU) |
| 📊 **DALES** (79.6 mIoU) |
| 🦋 **212k parameters** ([PointNeXt](https://github.com/guochengqian/PointNeXt) ÷ 200, [Stratified Transformer](https://github.com/dvlab-research/Stratified-Transformer) ÷ 40) |
| ⚡ S3DIS training in **3h on 1 GPU** ([PointNeXt](https://github.com/guochengqian/PointNeXt) ÷ 7, [Stratified Transformer](https://github.com/dvlab-research/Stratified-Transformer) ÷ 70) |
| ⚡ **Preprocessing x7 faster than [SPG](https://github.com/loicland/superpoint_graph)** |
| [![PWC](https://img.shields.io/endpoint.svg?url=https://paperswithcode.com/badge/efficient-3d-semantic-segmentation-with-1/3d-semantic-segmentation-on-s3dis)](https://paperswithcode.com/sota/3d-semantic-segmentation-on-s3dis?p=efficient-3d-semantic-segmentation-with-1) [![PWC](https://img.shields.io/endpoint.svg?url=https://paperswithcode.com/badge/efficient-3d-semantic-segmentation-with-1/3d-semantic-segmentation-on-dales)](https://paperswithcode.com/sota/3d-semantic-segmentation-on-dales?p=efficient-3d-semantic-segmentation-with-1) [![PWC](https://img.shields.io/endpoint.svg?url=https://paperswithcode.com/badge/efficient-3d-semantic-segmentation-with-1/semantic-segmentation-on-s3dis)](https://paperswithcode.com/sota/semantic-segmentation-on-s3dis?p=efficient-3d-semantic-segmentation-with-1) [![PWC](https://img.shields.io/endpoint.svg?url=https://paperswithcode.com/badge/efficient-3d-semantic-segmentation-with-1/3d-semantic-segmentation-on-kitti-360)](https://paperswithcode.com/sota/3d-semantic-segmentation-on-kitti-360?p=efficient-3d-semantic-segmentation-with-1) |

SuperCluster

SuperCluster is a superpoint-based architecture for panoptic segmentation of (very) large 3D scenes 🐘 based on SPT. We formulate the panoptic segmentation task as a scalable superpoint graph clustering task. To this end, our model is trained to predict the input parameters of a graph optimization problem whose solution is a panoptic segmentation 💡. This formulation allows supervising our model with per-node and per-edge objectives only, circumventing the need for computing an actual panoptic segmentation and associated matching issues at train time. At inference time, our fast parallelized algorithm solves the small graph optimization problem, yielding object instances 👥. Due to its lightweight backbone and scalable formulation, SuperCluster can process scenes of unprecedented scale at once, on a single GPU 🚀, with fewer than 1M parameters 🦋.

| ✨ SuperCluster in numbers ✨ |
|:---:|
| 📊 **S3DIS 6-Fold** (55.9 PQ) |
| 📊 **S3DIS Area 5** (50.1 PQ) |
| 📊 **ScanNet Val** (58.7 PQ) |
| 📊 **KITTI-360 Val** (48.3 PQ) |
| 📊 **DALES** (61.2 PQ) |
| 🦋 **212k parameters** ([PointGroup](https://github.com/dvlab-research/PointGroup) ÷ 37) |
| ⚡ S3DIS training in **4h on 1 GPU** |
| ⚡ **7.8km²** tile of **18M** points in **10.1s** on **1 GPU** |
| [![PWC](https://img.shields.io/endpoint.svg?url=https://paperswithcode.com/badge/scalable-3d-panoptic-segmentation-with/panoptic-segmentation-on-s3dis)](https://paperswithcode.com/sota/panoptic-segmentation-on-s3dis?p=scalable-3d-panoptic-segmentation-with) [![PWC](https://img.shields.io/endpoint.svg?url=https://paperswithcode.com/badge/scalable-3d-panoptic-segmentation-with/panoptic-segmentation-on-s3dis-area5)](https://paperswithcode.com/sota/panoptic-segmentation-on-s3dis-area5?p=scalable-3d-panoptic-segmentation-with) [![PWC](https://img.shields.io/endpoint.svg?url=https://paperswithcode.com/badge/scalable-3d-panoptic-segmentation-with/panoptic-segmentation-on-scannetv2)](https://paperswithcode.com/sota/panoptic-segmentation-on-scannetv2?p=scalable-3d-panoptic-segmentation-with) [![PWC](https://img.shields.io/endpoint.svg?url=https://paperswithcode.com/badge/scalable-3d-panoptic-segmentation-with/panoptic-segmentation-on-kitti-360)](https://paperswithcode.com/sota/panoptic-segmentation-on-kitti-360?p=scalable-3d-panoptic-segmentation-with) [![PWC](https://img.shields.io/endpoint.svg?url=https://paperswithcode.com/badge/scalable-3d-panoptic-segmentation-with/panoptic-segmentation-on-dales)](https://paperswithcode.com/sota/panoptic-segmentation-on-dales?p=scalable-3d-panoptic-segmentation-with) |


📰 Updates


💻 Environment requirements

This project was tested with:
- Linux OS
- 64G RAM
- NVIDIA GTX 1080 Ti 11G, NVIDIA V100 32G, NVIDIA A40 48G
- CUDA 11.8 and 12.1
- conda 23.3.1


🏗 Installation

Simply run `install.sh` to install all dependencies in a new conda environment named `spt`.

```bash
# Creates a conda env named 'spt' and installs dependencies
./install.sh
```

Note: See the Datasets page for setting up your dataset path and file structure.


🔩 Project structure

```
└── superpoint_transformer
    │
    ├── configs                   # Hydra configs
    │   ├── callbacks             # Callbacks configs
    │   ├── data                  # Data configs
    │   ├── debug                 # Debugging configs
    │   ├── experiment            # Experiment configs
    │   ├── extras                # Extra utilities configs
    │   ├── hparams_search        # Hyperparameter search configs
    │   ├── hydra                 # Hydra configs
    │   ├── local                 # Local configs
    │   ├── logger                # Logger configs
    │   ├── model                 # Model configs
    │   ├── paths                 # Project paths configs
    │   ├── trainer               # Trainer configs
    │   │
    │   ├── eval.yaml             # Main config for evaluation
    │   └── train.yaml            # Main config for training
    │
    ├── data                      # Project data (see docs/datasets.md)
    ├── docs                      # Documentation
    ├── logs                      # Logs generated by hydra and lightning loggers
    ├── media                     # Media illustrating the project
    ├── notebooks                 # Jupyter notebooks
    ├── scripts                   # Shell scripts
    │
    ├── src                       # Source code
    │   ├── data                  # Data structure for hierarchical partitions
    │   ├── datamodules           # Lightning DataModules
    │   ├── datasets              # Datasets
    │   ├── dependencies          # Compiled dependencies
    │   ├── loader                # DataLoader
    │   ├── loss                  # Loss
    │   ├── metrics               # Metrics
    │   ├── models                # Model architecture
    │   ├── nn                    # Model building blocks
    │   ├── optim                 # Optimization
    │   ├── transforms            # Functions for transforms, pre-transforms, etc
    │   ├── utils                 # Utilities
    │   ├── visualization         # Interactive visualization tool
    │   │
    │   ├── eval.py               # Run evaluation
    │   └── train.py              # Run training
    │
    ├── tests                     # Tests of any kind
    │
    ├── .env.example              # Example of file for storing private environment variables
    ├── .gitignore                # List of files ignored by git
    ├── .pre-commit-config.yaml   # Configuration of pre-commit hooks for code formatting
    ├── install.sh                # Installation script
    ├── LICENSE                   # Project license
    └── README.md
```

Note: See the Datasets page for further details on data/.

Note: See the Logs page for further details on logs/.


🚀 Usage

Datasets

See the Datasets page to set up your datasets.

Evaluation

Use the following command structure for evaluating our models from a checkpoint file `checkpoint.ckpt`, where `<task>` should be `semantic` for using SPT and `panoptic` for using SuperCluster:

```bash
# Evaluate for <task> segmentation on <dataset>
python src/eval.py experiment=<task>/<dataset> ckpt_path=/path/to/your/checkpoint.ckpt
```

Some examples:

```bash
# Evaluate SPT on S3DIS Fold 5
python src/eval.py experiment=semantic/s3dis datamodule.fold=5 ckpt_path=/path/to/your/checkpoint.ckpt

# Evaluate SPT on KITTI-360 Val
python src/eval.py experiment=semantic/kitti360 ckpt_path=/path/to/your/checkpoint.ckpt

# Evaluate SPT on DALES
python src/eval.py experiment=semantic/dales ckpt_path=/path/to/your/checkpoint.ckpt

# Evaluate SuperCluster on S3DIS Fold 5
python src/eval.py experiment=panoptic/s3dis datamodule.fold=5 ckpt_path=/path/to/your/checkpoint.ckpt

# Evaluate SuperCluster on S3DIS Fold 5 with {wall, floor, ceiling} as 'stuff'
python src/eval.py experiment=panoptic/s3dis_with_stuff datamodule.fold=5 ckpt_path=/path/to/your/checkpoint.ckpt

# Evaluate SuperCluster on ScanNet Val
python src/eval.py experiment=panoptic/scannet ckpt_path=/path/to/your/checkpoint.ckpt

# Evaluate SuperCluster on KITTI-360 Val
python src/eval.py experiment=panoptic/kitti360 ckpt_path=/path/to/your/checkpoint.ckpt

# Evaluate SuperCluster on DALES
python src/eval.py experiment=panoptic/dales ckpt_path=/path/to/your/checkpoint.ckpt
```

Note:

The pretrained weights of the SPT and SPT-nano models for S3DIS 6-Fold, KITTI-360 Val, and DALES are available at:

DOI

The pretrained weights of the SuperCluster models for S3DIS 6-Fold, S3DIS 6-Fold with stuff, ScanNet Val, KITTI-360 Val, and DALES are available at:

DOI

Training

Use the following command structure for training our models on a 32G-GPU, where `<task>` should be `semantic` for using SPT and `panoptic` for using SuperCluster:

```bash
# Train for <task> segmentation on <dataset>
python src/train.py experiment=<task>/<dataset>
```

Some examples:

```bash
# Train SPT on S3DIS Fold 5
python src/train.py experiment=semantic/s3dis datamodule.fold=5

# Train SPT on KITTI-360 Val
python src/train.py experiment=semantic/kitti360

# Train SPT on DALES
python src/train.py experiment=semantic/dales

# Train SuperCluster on S3DIS Fold 5
python src/train.py experiment=panoptic/s3dis datamodule.fold=5

# Train SuperCluster on S3DIS Fold 5 with {wall, floor, ceiling} as 'stuff'
python src/train.py experiment=panoptic/s3dis_with_stuff datamodule.fold=5

# Train SuperCluster on ScanNet Val
python src/train.py experiment=panoptic/scannet

# Train SuperCluster on KITTI-360 Val
python src/train.py experiment=panoptic/kitti360

# Train SuperCluster on DALES
python src/train.py experiment=panoptic/dales
```

Use the following to train on a 11G-GPU 💾 (training time and performance may vary):

```bash
# Train SPT on S3DIS Fold 5
python src/train.py experiment=semantic/s3dis_11g datamodule.fold=5

# Train SPT on KITTI-360 Val
python src/train.py experiment=semantic/kitti360_11g

# Train SPT on DALES
python src/train.py experiment=semantic/dales_11g

# Train SuperCluster on S3DIS Fold 5
python src/train.py experiment=panoptic/s3dis_11g datamodule.fold=5

# Train SuperCluster on S3DIS Fold 5 with {wall, floor, ceiling} as 'stuff'
python src/train.py experiment=panoptic/s3dis_with_stuff_11g datamodule.fold=5

# Train SuperCluster on ScanNet Val
python src/train.py experiment=panoptic/scannet_11g

# Train SuperCluster on KITTI-360 Val
python src/train.py experiment=panoptic/kitti360_11g

# Train SuperCluster on DALES
python src/train.py experiment=panoptic/dales_11g
```

Note: Encountering CUDA Out-Of-Memory errors 💀💾? See our dedicated troubleshooting section.

Note: Other ready-to-use configs are provided in `configs/experiment/`. You can easily design your own experiments by composing configs:

```bash
# Train Nano-3 for 50 epochs on DALES
python src/train.py datamodule=dales model=nano-3 trainer.max_epochs=50
```

See Lightning-Hydra for more information on how the config system works and all the awesome perks of the Lightning+Hydra combo.

Note: By default, your logs will automatically be uploaded to Weights and Biases, from where you can track and compare your experiments. Other loggers are available in configs/logger/. See Lightning-Hydra for more information on the logging options.

PyTorch Lightning predict()

Both SPT and SuperCluster inherit from LightningModule and implement predict_step(), which permits using PyTorch Lightning's Trainer.predict() mechanism.

```python
from torch.utils.data import DataLoader
from pytorch_lightning import Trainer
from src.models.semantic import SemanticSegmentationModule

# Predict behavior for semantic segmentation from a torch DataLoader
dataloader = DataLoader(...)
model = SemanticSegmentationModule(...)
trainer = Trainer(...)
batch, output = trainer.predict(model=model, dataloaders=dataloader)
```

This, however, still requires you to instantiate a Trainer, a DataLoader, and a model with relevant parameters.

For a little more simplicity, all our datasets inherit from LightningDataModule and implement predict_dataloader() by pointing to their corresponding test set by default. This permits directly passing a datamodule to PyTorch Lightning's Trainer.predict() without explicitly instantiating a DataLoader.

```python
from pytorch_lightning import Trainer
from src.models.semantic import SemanticSegmentationModule
from src.datamodules.s3dis import S3DISDataModule

# Predict behavior for semantic segmentation on S3DIS
datamodule = S3DISDataModule(...)
model = SemanticSegmentationModule(...)
trainer = Trainer(...)
batch, output = trainer.predict(model=model, datamodule=datamodule)
```

For more details on how to instantiate these, as well as the output format of our model, we strongly encourage you to play with our demo notebook and have a look at the src/eval.py script.

Full-resolution predictions

By design, our models only need to produce predictions for the superpoints of the $P_1$ partition level during training. All our losses and metrics are formulated as superpoint-wise objectives. This conveniently saves compute and memory at training and evaluation time.
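To make this concrete, a superpoint-wise target can be obtained by pooling point labels to the $P_1$ level, for instance by majority vote. The sketch below is a plain-Python illustration of that idea; the variable names and the implementation are our own, not the project's actual API.

```python
from collections import Counter

# Hypothetical toy data: point_labels[i] is the ground-truth class of point i,
# and super_index[i] is the P_1 superpoint that point i belongs to
point_labels = [0, 0, 1, 2, 2, 2, 1]
super_index = [0, 0, 0, 1, 1, 1, 1]

# Pool point labels to superpoints by majority vote, so losses and metrics
# can be computed on the (much smaller) set of superpoints
num_superpoints = max(super_index) + 1
votes = [Counter() for _ in range(num_superpoints)]
for sp, label in zip(super_index, point_labels):
    votes[sp][label] += 1
superpoint_labels = [v.most_common(1)[0][0] for v in votes]

print(superpoint_labels)  # [0, 2]
```

With such pooled labels, a standard classification loss over superpoints replaces a point-wise loss over millions of points.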

At inference time, however, we often need the predictions on the voxels of the $P_0$ partition level or on the full-resolution input point cloud. To this end, we provide helper functions to recover voxel-wise and full-resolution predictions.
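At its core, such an upsampling is a gather: each voxel inherits the prediction of the superpoint it belongs to. Here is a minimal plain-Python sketch of that mechanism; the names are illustrative assumptions, not the project's actual helpers.

```python
# Hypothetical per-superpoint class predictions at the P_1 level
superpoint_pred = [0, 2, 1]

# For each P_0 voxel, the index of the P_1 superpoint containing it
super_index = [0, 0, 1, 2, 2, 1]

# Upsampling is a gather: each voxel inherits its superpoint's prediction.
# Applying the same indexing once more, with the voxel index of each raw
# point, recovers full-resolution predictions.
voxel_pred = [superpoint_pred[i] for i in super_index]
print(voxel_pred)  # [0, 0, 2, 1, 1, 2]
```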

See our demo notebook for more details on these.

Using a pretrained model on custom data

For running a pretrained model on your own point cloud, please refer to our tutorial slides, notebook, and video.

Parametrizing the superpoint partition on custom data

Our hierarchical superpoint partition is computed at preprocessing time. Its construction involves several steps whose parametrization must be adapted to your specific dataset and task. Please refer to our tutorial slides, notebook, and video to better understand this process and tune it to your needs.

Parameterizing SuperCluster graph clustering

One specificity of SuperCluster is that the model is not trained to explicitly do panoptic segmentation, but to predict the input parameters of a superpoint graph clustering problem whose solution is a panoptic segmentation.

For this reason, the hyperparameters for this graph optimization problem are selected after training, with a grid search on the training or validation set. We find that fairly similar hyperparameters yield the best performance on all our datasets (see our paper's appendix). Yet, you may want to explore these hyperparameters for your own dataset. To this end, see our demo notebook for parameterizing the panoptic segmentation.
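Such a post-training search can be as simple as an exhaustive sweep over a small grid. The sketch below uses a hypothetical `evaluate_pq` stand-in for running the graph clustering with given hyperparameters and measuring Panoptic Quality on a validation set; neither the function nor the parameter names come from the project.

```python
from itertools import product

def evaluate_pq(regularization, cutoff):
    # Placeholder: in practice this would run the superpoint graph clustering
    # with the given hyperparameters and return the validation PQ
    return -(regularization - 0.1) ** 2 - (cutoff - 10) ** 2

grid = {
    "regularization": [0.01, 0.1, 1.0],
    "cutoff": [1, 10, 100],
}

# Exhaustive sweep over all hyperparameter combinations
best = max(
    product(*grid.values()),
    key=lambda combo: evaluate_pq(**dict(zip(grid, combo))),
)
best_params = dict(zip(grid, best))
print(best_params)  # {'regularization': 0.1, 'cutoff': 10}
```

Because the model itself is fixed, each grid point only requires re-running the (fast) clustering step, not retraining.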

Notebooks & visualization

We provide notebooks to help you get started with manipulating our core data structures, configs loading, dataset and model instantiation, inference on each dataset, and visualization.

In particular, we created an interactive visualization tool ✨ which can be used to produce shareable HTMLs. Demos of how to use this tool are provided in the notebooks. Additionally, examples of such HTML files are provided in media/visualizations.7z


📚 Documentation

| Location | Content |
|:---|:---|
| README | General introduction to the project |
| docs/data_structures | Introduction to the core data structures of this project: Data, NAG, Cluster, and InstanceData |
| docs/datasets | Introduction to our implemented datasets, to our BaseDataset class, and how to create your own dataset inheriting from it |
| docs/logging | Introduction to logging and the project's logs/ structure |
| docs/visualization | Introduction to our interactive 3D visualization tool |

Note: We endeavoured to comment our code as much as possible to make this project usable. If you don't find the answer you are looking for in docs/, make sure to have a look at the source code and past issues. Still, if some parts remain unclear or more documentation would help, feel free to let us know by creating an issue!


👩‍🔧 Troubleshooting

Here are some common issues and tips for tackling them.

SPT or SuperCluster on an 11G-GPU

Our default configurations are designed for a 32G-GPU. Yet, SPT and SuperCluster can run on an 11G-GPU 💾, with minor time and performance variations.

We provide configs in configs/experiment/semantic for training SPT on an 11G-GPU 💾:

```bash
# Train SPT on S3DIS Fold 5
python src/train.py experiment=semantic/s3dis_11g datamodule.fold=5

# Train SPT on KITTI-360 Val
python src/train.py experiment=semantic/kitti360_11g

# Train SPT on DALES
python src/train.py experiment=semantic/dales_11g
```

Similarly, we provide configs in configs/experiment/panoptic for training SuperCluster on an 11G-GPU 💾:

```bash
# Train SuperCluster on S3DIS Fold 5
python src/train.py experiment=panoptic/s3dis_11g datamodule.fold=5

# Train SuperCluster on S3DIS Fold 5 with {wall, floor, ceiling} as 'stuff'
python src/train.py experiment=panoptic/s3dis_with_stuff_11g datamodule.fold=5

# Train SuperCluster on ScanNet Val
python src/train.py experiment=panoptic/scannet_11g

# Train SuperCluster on KITTI-360 Val
python src/train.py experiment=panoptic/kitti360_11g

# Train SuperCluster on DALES
python src/train.py experiment=panoptic/dales_11g
```

CUDA Out-Of-Memory Errors

Having some CUDA OOM errors 💀💾? Here are some parameters you can play with to mitigate GPU memory use, based on when the error occurs.

Parameters affecting CUDA memory. **Legend**: 🟡 Preprocessing | 🔴 Training | 🟣 Inference (including validation and testing during training)

| Parameter | Description | When |
|:---|:---|:---:|
| `datamodule.xy_tiling` | Splits dataset tiles into `xy_tiling^2` smaller tiles, based on a regular XY grid. Ideal for square-shaped tiles à la DALES. Note this will affect the number of training steps. | 🟡🟣 |
| `datamodule.pc_tiling` | Splits dataset tiles into `2^pc_tiling` smaller tiles, based on their principal component. Ideal for varying tile shapes à la S3DIS and KITTI-360. Note this will affect the number of training steps. | 🟡🟣 |
| `datamodule.max_num_nodes` | Limits the number of $P_1$ partition nodes/superpoints in the **training batches**. | 🔴 |
| `datamodule.max_num_edges` | Limits the number of $P_1$ partition edges in the **training batches**. | 🔴 |
| `datamodule.voxel` | Increasing the voxel size will reduce preprocessing, training and inference times but will reduce performance. | 🟡🔴🟣 |
| `datamodule.pcp_regularization` | Regularization for partition levels. The larger, the fewer the superpoints. | 🟡🔴🟣 |
| `datamodule.pcp_spatial_weight` | Importance of the 3D position in the partition. The smaller, the fewer the superpoints. | 🟡🔴🟣 |
| `datamodule.pcp_cutoff` | Minimum superpoint size. The larger, the fewer the superpoints. | 🟡🔴🟣 |
| `datamodule.graph_k_max` | Maximum number of adjacent nodes in the superpoint graphs. The smaller, the fewer the superedges. | 🟡🔴🟣 |
| `datamodule.graph_gap` | Maximum distance between adjacent superpoints in the superpoint graphs. The smaller, the fewer the superedges. | 🟡🔴🟣 |
| `datamodule.graph_chunk` | Reduce to avoid OOM when `RadiusHorizontalGraph` preprocesses the superpoint graph. | 🟡 |
| `datamodule.dataloader.batch_size` | Controls the number of loaded tiles. Each **train batch** is composed of `batch_size * datamodule.sample_graph_k` spherical samplings. Inference is performed on **entire validation and test tiles**, without spherical sampling. | 🔴🟣 |
| `datamodule.sample_segment_ratio` | Randomly drops a fraction of the superpoints at each partition level. | 🔴 |
| `datamodule.sample_graph_k` | Controls the number of spherical samples in the **train batches**. | 🔴 |
| `datamodule.sample_graph_r` | Controls the radius of spherical samples in the **train batches**. Set `sample_graph_r<=0` to use the entire tile without spherical sampling. | 🔴 |
| `datamodule.sample_point_min` | Controls the minimum number of $P_0$ points sampled per superpoint in the **train batches**. | 🔴 |
| `datamodule.sample_point_max` | Controls the maximum number of $P_0$ points sampled per superpoint in the **train batches**. | 🔴 |
| `callbacks.gradient_accumulator.scheduling` | Gradient accumulation. Can be used to train with smaller batches, with more training steps. | 🔴 |


💳 Credits


Citing our work

If your work uses all or part of the present code, please include the following citation:

```
@article{robert2023spt,
  title={Efficient 3D Semantic Segmentation with Superpoint Transformer},
  author={Robert, Damien and Raguet, Hugo and Landrieu, Loic},
  journal={Proceedings of the IEEE/CVF International Conference on Computer Vision},
  year={2023}
}

@article{robert2024scalable,
  title={Scalable 3D Panoptic Segmentation as Superpoint Graph Clustering},
  author={Robert, Damien and Raguet, Hugo and Landrieu, Loic},
  journal={Proceedings of the IEEE International Conference on 3D Vision},
  year={2024}
}
```

You can find our SPT paper 📄 and SuperCluster paper 📄 on arXiv.

Also, if you ❤️ or simply use this project, don't forget to give the repository a ⭐; it means a lot to us!

Owner

  • Name: Damien ROBERT
  • Login: drprojects
  • Kind: user
  • Location: France
  • Company: ENGIE Lab CRIGEN & IGN

PhD candidate at IGN and ENGIE Lab CRIGEN. I design deep learning methods for computer vision on 3D and 2D data.

Citation (CITATION.cff)

cff-version: 1.2.0
message: "Please cite our papers if you use this code in your own work."
title: "Superpoint Transformer"
authors:
- family-names: "Robert"
  given-names: "Damien"
- family-names: "Raguet"
  given-names: "Hugo"
- family-names: "Landrieu"
  given-names: "Loic"
date-released: 2023-06-15
license: MIT
url: "https://github.com/drprojects/superpoint_transformer"
preferred-citation:
  type: conference-paper
  authors:
  - family-names: "Robert"
    given-names: "Damien"
    orcid: https://orcid.org/0000-0003-0983-7053
  - family-names: "Raguet"
    given-names: "Hugo"
    orcid: https://orcid.org/0000-0002-4598-6710
  - family-names: "Landrieu"
    given-names: "Loic"
    orcid: https://orcid.org/0000-0002-7738-8141
  conference: "ICCV"
  title: "Efficient 3D Semantic Segmentation with Superpoint Transformer"
  year: 2023

GitHub Events

Total
  • Issues event: 63
  • Watch event: 253
  • Issue comment event: 96
  • Push event: 19
  • Pull request review comment event: 6
  • Pull request review event: 7
  • Pull request event: 5
  • Fork event: 33
Last Year
  • Issues event: 63
  • Watch event: 253
  • Issue comment event: 96
  • Push event: 19
  • Pull request review comment event: 6
  • Pull request review event: 7
  • Pull request event: 5
  • Fork event: 33

Committers

Last synced: 9 months ago

All Time
  • Total Commits: 92
  • Total Committers: 8
  • Avg Commits per committer: 11.5
  • Development Distribution Score (DDS): 0.185
Past Year
  • Commits: 38
  • Committers: 4
  • Avg Commits per committer: 9.5
  • Development Distribution Score (DDS): 0.184
Top Committers
Name Email Commits
drprojects d****1@w****r 75
romain janvier r****r@h****r 5
Shanci-Li 1****1@q****m 3
gardiens p****z@g****m 2
Charles Gaydon 1****n 2
1a7r0ch3 h****t@l****g 2
macadam d****m@m****l 2
wl5719 w****9@e****m 1
Committer Domains (Top 20 + Academic)

Issues and Pull Requests

Last synced: 6 months ago

All Time
  • Total issues: 184
  • Total pull requests: 12
  • Average time to close issues: 6 days
  • Average time to close pull requests: about 1 month
  • Total issue authors: 106
  • Total pull request authors: 6
  • Average comments per issue: 4.24
  • Average comments per pull request: 1.58
  • Merged pull requests: 8
  • Bot issues: 0
  • Bot pull requests: 0
Past Year
  • Issues: 40
  • Pull requests: 1
  • Average time to close issues: 3 days
  • Average time to close pull requests: 1 minute
  • Issue authors: 31
  • Pull request authors: 1
  • Average comments per issue: 2.23
  • Average comments per pull request: 0.0
  • Merged pull requests: 0
  • Bot issues: 0
  • Bot pull requests: 0
Top Authors
Issue Authors
  • narges-tk (10)
  • jing-zhao9 (7)
  • meehirmhatrepy (6)
  • Wind010321 (6)
  • gyy520cyaowu (5)
  • hpc100 (4)
  • gardiens (4)
  • gvoysey (4)
  • yfq512 (4)
  • ImaneTopo (4)
  • hansoogithub (4)
  • mbendjilali (3)
  • MdRanaSarkar (3)
  • MenglinQiu (3)
  • zeejja (3)
Pull Request Authors
  • rjanvier (5)
  • 1a7r0ch3 (4)
  • gardiens (3)
  • CharlesGaydon (3)
  • BALA22-cyber (2)
  • mbendjilali (2)
Top Labels
Issue Labels
bug (7) question (3) enhancement (2) good first issue (1)
Pull Request Labels

Packages

  • Total packages: 1
  • Total downloads: unknown
  • Total dependent packages: 0
  • Total dependent repositories: 0
  • Total versions: 3
proxy.golang.org: github.com/drprojects/superpoint_transformer
  • Versions: 3
  • Dependent Packages: 0
  • Dependent Repositories: 0
Rankings
Dependent packages count: 6.5%
Average: 6.7%
Dependent repos count: 7.0%
Last synced: 6 months ago

Dependencies

.github/workflows/code-quality-main.yaml actions
  • actions/checkout v2 composite
  • actions/setup-python v2 composite
  • pre-commit/action v2.0.3 composite
.github/workflows/code-quality-pr.yaml actions
  • actions/checkout v2 composite
  • actions/setup-python v2 composite
  • pre-commit/action v2.0.3 composite
  • trilom/file-changes-action v1.2.4 composite
.github/workflows/test.yml actions
  • actions/checkout v3 composite
  • actions/checkout v2 composite
  • actions/setup-python v3 composite
  • actions/setup-python v2 composite
  • codecov/codecov-action v3 composite