scim

Segmentation Networks with Uncertainty

https://github.com/hermannsblum/scim

Science Score: 64.0%

This score indicates how likely this project is to be science-related based on various indicators:

  • CITATION.cff file
    Found CITATION.cff file
  • codemeta.json file
    Found codemeta.json file
  • .zenodo.json file
    Found .zenodo.json file
  • DOI references
  • Academic publication links
    Links to: arxiv.org, zenodo.org
  • Committers with academic emails
    2 of 2 committers (100.0%) from academic institutions
  • Institutional organization owner
  • JOSS paper metadata
  • Scientific vocabulary similarity
    Low similarity (8.0%) to scientific vocabulary
Last synced: 6 months ago

Repository

Segmentation Networks with Uncertainty

Basic Info
  • Host: GitHub
  • Owner: hermannsblum
  • Language: Python
  • Default Branch: main
  • Homepage:
  • Size: 14.4 MB
Statistics
  • Stars: 8
  • Watchers: 2
  • Forks: 0
  • Open Issues: 0
  • Releases: 0
Created almost 5 years ago · Last pushed almost 3 years ago
Metadata Files
Readme Citation

README.md

SCIM: Simultaneous Clustering, Inference, and Mapping for Open-World Semantic Scene Understanding

[paper]

This repository provides our evaluation data, pretrained models, and implementations of the tested methods.

Model Checkpoints

To get started with experiments easily, we provide DeepLabv3+ checkpoints trained first on COCO and then on ScanNet, each with different classes excluded:

| Outlier Class | ScanNet Validation | Checkpoint |
| --- | --- | --- |
| television | 43% mIoU | download |
| books + bookshelf | 42% mIoU | download |
| towel | 41% mIoU | download |

You can also load the models directly through torch.hub:

```python
import torch

no_tv = torch.hub.load('hermannsblum/scim:main', 'dv3res101_no_tv')
no_book = torch.hub.load('hermannsblum/scim:main', 'dv3res101_no_book')
no_towel = torch.hub.load('hermannsblum/scim:main', 'dv3res101_no_towel')
```

Evaluation Data

To automatically download and preprocess the data, we use TFDS with a PyTorch wrapper:

```python
import tensorflow_datasets as tfds
import semsegcluster.data.scannet  # registers the ScanNet datasets with TFDS
from semsegcluster.data.tfds_to_torch import TFDataIterableDataset

data = tfds.load('scannet/scene0354_00', split='validation')
torchdata = TFDataIterableDataset(data)
```

Method Implementations

Method implementations are split up into several steps for added flexibility. Below we describe the workflows for each method.

Nakajima

1. Run inference:

   ```bash
   python deeplab/scannet_inference.py with subset=$SCENE pretrained_model=$MODEL
   ```

2. Run mapping (for flexibility, we run semantic mapping and uncertainty mapping separately):

   ```bash
   roslaunch panoptic_mapping_utils scannnet_mapping.launch scene:=$SCENE model:=$MODEL inference_path:=/scannet_inference/$SCENE/$MODEL
   roslaunch panoptic_mapping_utils scannnet_uncertmap.launch scene:=$SCENE model:=$MODEL inference_path:=/scannet_inference/$SCENE/$MODEL
   ```

3. Render the maps:

   ```bash
   roslaunch panoptic_mapping_utils scannnet_predrender.launch scene:=$SCENE model:=$MODEL inference_path:=/scannet_inference/$SCENE/$MODEL
   roslaunch panoptic_mapping_utils scannnet_voxelidrender.launch scene:=$SCENE model:=$MODEL inference_path:=/scannet_inference/$SCENE/$MODEL
   roslaunch panoptic_mapping_utils scannnet_uncertrender.launch scene:=$SCENE model:=$MODEL inference_path:=/scannet_inference/$SCENE/$MODEL
   ```

4. Get the geometric features (we run 3DSmoothNet in a Singularity container):

   ```bash
   singularity run --nv \
     --bind $OUTPUTS/$SCENE/$MODEL/point_cloud_0.ply:/pc.ply \
     --bind $OUTPUTS/$SCENE/$MODEL/smoothnet:/output \
     --bind $SMOOTHNET_DATA/evaluate:/3DSmoothNet/data/evaluate \
     --bind $SMOOTHNET_DATA/logs:/3DSmoothNet/logs \
     --bind $SMOOTHNET_DATA/preprocessed:/preprocessed \
     smoothnet.simg
   roslaunch panoptic_mapping_utils scannnet_geofeatures.launch scene:=$SCENE model:=$MODEL
   ```

5. Run parameter optimisation and clustering:

   ```bash
   python3 deeplab/scannet_nakajima.py best_mcl_nakajima with subset=$SCENE pretrained_model=$MODEL n_calls=100 shard=20
   ```
Uhlemeyer

1. Run inference:

   ```bash
   python deeplab/scannet_inference.py with subset=$SCENE pretrained_model=$MODEL
   ```

2. Run meta-segmentation and clustering:

   ```bash
   python3 deeplab/scannet_uhlemeyer.py with subset=$SCENE pretrained_model=$MODEL pred_name=pred uncert_name=maxlogit-pp eps=3.5 min_samples=10
   ```

3. Train the segmentation model and run inference with the new model:

   ```bash
   python deeplab/scannet_adaptation.py with subset=$SCENE pretrained_model=$MODEL pseudolabels=uhlemeyer
   python deeplab/scannet_adaptedinference.py with training= subset=$SCENE
   ```
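The clustering step above passes `eps=3.5` and `min_samples=10`, the two parameters of DBSCAN-style density clustering; they map directly onto scikit-learn's interface (scikit-learn is a dependency of this package). A minimal illustrative sketch of that kind of clustering — the `cluster_outliers` helper is not part of the repository, and the actual script builds its features and post-processing differently:

```python
import numpy as np
from sklearn.cluster import DBSCAN

def cluster_outliers(points: np.ndarray, eps: float = 3.5, min_samples: int = 10) -> np.ndarray:
    """Density-cluster outlier pixels (e.g. coordinates of high-uncertainty
    predictions); returns one label per point, with -1 marking noise."""
    return DBSCAN(eps=eps, min_samples=min_samples).fit_predict(points)

# Two well-separated blobs of "outlier pixels" become two discovered objects.
rng = np.random.default_rng(0)
pts = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(100, 1, (50, 2))])
labels = cluster_outliers(pts, eps=3.5, min_samples=10)
```

Points that have fewer than `min_samples` neighbours within `eps` end up labelled -1 and are ignored when forming new pseudo-classes.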
Our approach (SCIM)

1. Run inference:

   ```bash
   python deeplab/scannet_inference.py with subset=$SCENE pretrained_model=$MODEL
   ```

2. Run mapping (for flexibility, we run semantic mapping and uncertainty mapping separately):

   ```bash
   roslaunch panoptic_mapping_utils scannnet_mapping.launch scene:=$SCENE model:=$MODEL inference_path:=/scannet_inference/$SCENE/$MODEL
   roslaunch panoptic_mapping_utils scannnet_uncertmap.launch scene:=$SCENE model:=$MODEL inference_path:=/scannet_inference/$SCENE/$MODEL
   ```

3. Render the maps:

   ```bash
   roslaunch panoptic_mapping_utils scannnet_predrender.launch scene:=$SCENE model:=$MODEL inference_path:=/scannet_inference/$SCENE/$MODEL
   roslaunch panoptic_mapping_utils scannnet_voxelidrender.launch scene:=$SCENE model:=$MODEL inference_path:=/scannet_inference/$SCENE/$MODEL
   roslaunch panoptic_mapping_utils scannnet_uncertrender.launch scene:=$SCENE model:=$MODEL inference_path:=/scannet_inference/$SCENE/$MODEL
   ```

4. Get the geometric features (we run 3DSmoothNet in a Singularity container):

   ```bash
   singularity run --nv \
     --bind $OUTPUTS/$SCENE/$MODEL/point_cloud_0.ply:/pc.ply \
     --bind $OUTPUTS/$SCENE/$MODEL/smoothnet:/output \
     --bind $SMOOTHNET_DATA/evaluate:/3DSmoothNet/data/evaluate \
     --bind $SMOOTHNET_DATA/logs:/3DSmoothNet/logs \
     --bind $SMOOTHNET_DATA/preprocessed:/preprocessed \
     smoothnet.simg
   roslaunch panoptic_mapping_utils scannnet_geofeatures.launch scene:=$SCENE model:=$MODEL
   ```

5. Run parameter optimisation and clustering (here we combine segmentation features, geometric features, and DINO; see the `deeplab/` folder for different scripts combining different features):

   ```bash
   python3 deeplab/scannet_segandgeoanddino.py best_hdbscan with subset=$SCENE pretrained_model=$MODEL n_calls=200 cluster_selection_method=eom
   ```

6. Combine clustering and mapping into pseudolabels (`outlier` needs to be adjusted depending on the clustering above):

   ```bash
   python deeplab/pseudolabel.py with subset=$SCENE pretrained_model=$MODEL outlier=segandgeoanddinohdbscan
   ```

7. Train the segmentation model and run inference with the new model:

   ```bash
   python deeplab/scannet_adaptation.py with subset=$SCENE pretrained_model=$MODEL pseudolabels=merged-pseudolabel-pred-segandgeoanddinohdbscan
   python deeplab/scannet_adaptedinference.py with training= subset=$SCENE
   ```
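Every step above takes the same `$SCENE` and `$MODEL` pair, so the workflows are easy to drive from a small script. A minimal sketch that assembles the shared command strings — the `scim_pipeline` helper and the model name are illustrative, not part of the repository:

```python
def scim_pipeline(scene: str, model: str) -> list[str]:
    """Ordered shell commands for inference, mapping, rendering, and
    clustering of one scene/model pair, as in the workflows above."""
    inference_path = f"/scannet_inference/{scene}/{model}"
    ros = f"scene:={scene} model:={model} inference_path:={inference_path}"
    return [
        # 1. inference
        f"python deeplab/scannet_inference.py with subset={scene} pretrained_model={model}",
        # 2. semantic and uncertainty mapping
        f"roslaunch panoptic_mapping_utils scannnet_mapping.launch {ros}",
        f"roslaunch panoptic_mapping_utils scannnet_uncertmap.launch {ros}",
        # 3. rendering of the maps
        f"roslaunch panoptic_mapping_utils scannnet_predrender.launch {ros}",
        f"roslaunch panoptic_mapping_utils scannnet_voxelidrender.launch {ros}",
        f"roslaunch panoptic_mapping_utils scannnet_uncertrender.launch {ros}",
        # 5. parameter optimisation and clustering (step 4, the 3DSmoothNet
        # container run, needs extra bind mounts and is omitted here)
        f"python3 deeplab/scannet_segandgeoanddino.py best_hdbscan with subset={scene} pretrained_model={model} n_calls=200 cluster_selection_method=eom",
    ]

cmds = scim_pipeline("scene0354_00", "dv3res101_no_tv")  # model name is illustrative
# Each command must finish before the next starts, e.g.:
# for cmd in cmds:
#     subprocess.run(cmd, shell=True, check=True)
```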

Installation

We offer a Dockerfile that installs the whole codebase into a container. To install individual parts, see below:

Clustering & Learning

This part is implemented in Python. To install it, run:

```bash
git clone https://github.com/hermannsblum/scim.git
cd scim && python -m pip install -e .
```

Mapping

For mapping, we rely on an existing mapping framework. This is implemented in ROS.

First, create a catkin workspace:

```bash
sudo apt-get install python3-catkin-tools
mkdir -p ~/catkin_ws/src
cd ~/catkin_ws
catkin init
catkin config --extend /opt/ros/noetic
catkin config --cmake-args -DCMAKE_BUILD_TYPE=RelWithDebInfo
catkin config --merge-devel
```

Then install the framework into the catkin workspace:

```bash
wstool init \
  && git clone --branch hermann-devel https://github.com/ethz-asl/panoptic_mapping.git \
  && wstool merge panoptic_mapping/panoptic_mapping_https.rosinstall \
  && wstool update -j8 \
  && catkin build panoptic_mapping_utils point_cloud_io
```

Data Structure

All intermediate outputs of the different steps are stored in a folder, which needs to be set correctly in a few places:

Add a file `semsegcluster/settings.py` with the following content:

```python
EXPERIMENT_STORAGE_FOLDER = '<folder for experimental logs>'
TMPDIR = '/tmp'
TMP_DIR = '/tmp'
EXP_OUT = '<folder for outputs>'
```

The `<folder for outputs>` is also the one that should be used in the `inference_path:=` argument to the roslaunch files.
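With these settings, each scene/model pair gets its own subfolder of `EXP_OUT` (the `$OUTPUTS/$SCENE/$MODEL` paths bound into the Singularity container follow the same pattern). A small sketch of that layout; both helpers are illustrative, not part of the repository:

```python
import os

def output_dir(exp_out: str, scene: str, model: str) -> str:
    """Per-scene, per-model output folder, matching the
    inference_path:=<EXP_OUT>/<scene>/<model> pattern."""
    return os.path.join(exp_out, scene, model)

def point_cloud_path(exp_out: str, scene: str, model: str) -> str:
    """Location of the map point cloud that the 3DSmoothNet step
    binds into its container as /pc.ply."""
    return os.path.join(output_dir(exp_out, scene, model), "point_cloud_0.ply")

d = output_dir("/data/scim_out", "scene0354_00", "dv3res101_no_tv")
```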

Experimental logs are stored with sacred. If, instead of tracking them in a folder, you want to track them in a database, please add the following lines to `settings.py`:

```python
EXPERIMENT_DB_HOST =
EXPERIMENT_DB_USER =
EXPERIMENT_DB_PWD =
EXPERIMENT_DB_NAME =
```

ScanNet

Unfortunately, the code does not yet download directly from ScanNet. Therefore, first download the relevant scenes as described here, then put them into a zip archive called valscans.zip stored at ~/tensorflow_datasets/downloads/manual/valscans.zip. TFDS will then automatically extract, resize, and load the scenes.
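Assembling the archive can be scripted with the standard library; a minimal sketch, where `package_valscans` is an illustrative helper and the layout (scene folders at the zip root) is an assumption to check against your download script:

```python
import zipfile
from pathlib import Path

def package_valscans(scan_root: str, manual_dir: str) -> Path:
    """Zip everything under scan_root into valscans.zip inside the
    TFDS manual-download directory, keeping paths relative to scan_root."""
    manual = Path(manual_dir)
    manual.mkdir(parents=True, exist_ok=True)
    archive = manual / "valscans.zip"
    with zipfile.ZipFile(archive, "w", zipfile.ZIP_DEFLATED) as zf:
        for f in sorted(Path(scan_root).rglob("*")):
            if f.is_file():
                zf.write(f, f.relative_to(scan_root))
    return archive

# Typical call (paths are illustrative):
# package_valscans("downloads/scans",
#                  str(Path.home() / "tensorflow_datasets/downloads/manual"))
```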

Owner

  • Name: Hermann
  • Login: hermannsblum
  • Kind: user
  • Company: ETH Zürich

PhD Student at the Autonomous Systems Lab

Citation (CITATION.cff)

cff-version: 1.2.0
message: "If you use this software, please cite it as below."
authors:
- family-names: "Blum"
  given-names: "Hermann"
  orcid: "https://orcid.org/0000-0002-1713-7877"
- family-names: "Müller"
  given-names: "Marcus G"
- family-names: "Gawel"
  given-names: "Abel"
- family-names: "Siegwart"
  given-names: "Roland"
- family-names: "Cadena"
  given-names: "Cesar"
title: "SCIM: Simultaneous Clustering, Inference, and Mapping for Open-World Semantic Scene Understanding"
version: 1.0.0
date-released: 2022-07-19
url: "https://arxiv.org/abs/2206.10670"
preferred-citation:
  type: conference-paper
  authors:
  - family-names: "Blum"
    given-names: "Hermann"
    orcid: "https://orcid.org/0000-0002-1713-7877"
  - family-names: "Müller"
    given-names: "Marcus G"
  - family-names: "Gawel"
    given-names: "Abel"
  - family-names: "Siegwart"
    given-names: "Roland"
  - family-names: "Cadena"
    given-names: "Cesar"
  title: "SCIM: Simultaneous Clustering, Inference, and Mapping for Open-World Semantic Scene Understanding"
  year: 2022


Committers

Last synced: 11 months ago

All Time
  • Total Commits: 211
  • Total Committers: 2
  • Avg Commits per committer: 105.5
  • Development Distribution Score (DDS): 0.005
Past Year
  • Commits: 0
  • Committers: 0
  • Avg Commits per committer: 0.0
  • Development Distribution Score (DDS): 0.0
Top Committers
| Name | Email | Commits |
| --- | --- | --- |
| Hermann | b****h@e****h | 210 |
| René Zurbrügg | z****e@s****h | 1 |

Issues and Pull Requests

Last synced: 9 months ago

All Time
  • Total issues: 0
  • Total pull requests: 3
  • Average time to close issues: N/A
  • Average time to close pull requests: 1 day
  • Total issue authors: 0
  • Total pull request authors: 2
  • Average comments per issue: 0
  • Average comments per pull request: 0.0
  • Merged pull requests: 3
  • Bot issues: 0
  • Bot pull requests: 0
Past Year
  • Issues: 0
  • Pull requests: 0
  • Average time to close issues: N/A
  • Average time to close pull requests: N/A
  • Issue authors: 0
  • Pull request authors: 0
  • Average comments per issue: 0
  • Average comments per pull request: 0
  • Merged pull requests: 0
  • Bot issues: 0
  • Bot pull requests: 0
Top Authors
Pull Request Authors
  • hermannsblum (2)
  • renezurbruegg (1)

Dependencies

setup.py pypi
  • gdown *
  • hdbscan *
  • hnswlib *
  • incense *
  • kornia *
  • markov_clustering *
  • numpy *
  • open3d *
  • pymongo ==3.12
  • sacred *
  • scikit-learn *
  • scikit-optimize *
  • torchmetrics *
.github/workflows/docker-image.yml actions
  • actions/checkout v3 composite
Dockerfile docker
  • hermannsblum/nvidia-ros noetic build