Science Score: 64.0%
This score indicates how likely this project is to be science-related based on various indicators:
- ✓ CITATION.cff file: found
- ✓ codemeta.json file: found
- ✓ .zenodo.json file: found
- ○ DOI references
- ✓ Academic publication links: links to arxiv.org, zenodo.org
- ✓ Committers with academic emails: 2 of 2 committers (100.0%) from academic institutions
- ○ Institutional organization owner
- ○ JOSS paper metadata
- ○ Scientific vocabulary similarity: low similarity (8.0%) to scientific vocabulary
Repository
Segmentation Networks with Uncertainty
Basic Info
Statistics
- Stars: 8
- Watchers: 2
- Forks: 0
- Open Issues: 0
- Releases: 0
Metadata Files
README.md
SCIM: Simultaneous Clustering, Inference, and Mapping for Open-World Semantic Scene Understanding
[paper]
This repository provides our evaluation data, pretrained models, and implementations of the tested methods.
Model Checkpoints
To get started on experiments easily, we provide DeepLabv3+ checkpoints trained first on COCO and then on ScanNet, each excluding a different class:
| Outlier Class | ScanNet Validation | Checkpoint |
|---|---|---|
| television | 43% mIoU | download |
| books + bookshelf | 42% mIoU | download |
| towel | 41% mIoU | download |
You can also load the models directly through torch.hub:
```python
import torch

no_tv = torch.hub.load('hermannsblum/scim:main', 'dv3res101_no_tv')
no_book = torch.hub.load('hermannsblum/scim:main', 'dv3res101_no_book')
no_towel = torch.hub.load('hermannsblum/scim:main', 'dv3res101_no_towel')
```
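Once loaded, a checkpoint can be applied to an image roughly as follows. This is a minimal sketch assuming the hub models behave like standard DeepLabv3+ segmentation networks; the helper name `segment` and the dict-output handling are our assumptions, not part of the repository:

```python
import torch

def segment(model, image):
    """Run a loaded segmentation checkpoint on one RGB image.

    Hypothetical helper (not part of this repository). It assumes the
    model maps a (1, 3, H, W) float batch to per-class logits of shape
    (1, C, H, W), as a DeepLabv3+ head typically does.
    """
    model.eval()
    with torch.no_grad():
        logits = model(image.unsqueeze(0))  # add batch dimension
        if isinstance(logits, dict):  # torchvision-style models return a dict
            logits = logits['out']
    return logits.squeeze(0).argmax(dim=0)  # (H, W) per-pixel class indices
```

The exact output format of the repository's checkpoints may differ; check the model's forward signature before relying on this.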
Evaluation Data
To automatically download and preprocess data, we use TFDS with a PyTorch wrapper:
```python
import tensorflow_datasets as tfds
import semsegcluster.data.scannet
from semsegcluster.data.tfds_to_torch import TFDataIterableDataset

data = tfds.load('scannet/scene0354_00', split='validation')
torchdata = TFDataIterableDataset(data)
```
Method Implementations
Method implementations are split up into several steps for added flexibility. Below we describe the workflows for each method.
Nakajima

1. run inference:
   ```bash
   python deeplab/scannet_inference.py with subset=$SCENE and pretrained_model=$MODEL
   ```
2. run mapping (for flexibility, we run semantic mapping and uncertainty mapping separately):
   ```bash
   roslaunch panoptic_mapping_utils scannnet_mapping.launch scene:=$SCENE model:=$MODEL inference_path:=
   ```

Uhlemeyer

1. run inference:
   ```bash
   python deeplab/scannet_inference.py with subset=$SCENE and pretrained_model=$MODEL
   ```
2. run meta-segmentation and clustering:
   ```bash
   python3 deeplab/scannet_uhlemeyer.py with subset=$SCENE pretrained_model=$MODEL pred_name=pred uncert_name=maxlogit-pp eps=3.5 min_samples=10
   ```
3. train the segmentation model and run inference with the new model:
   ```bash
   python deeplab/scannet_adaptation.py with subset=$SCENE and pretrained_model=$MODEL pseudolabels=uhlemeyer
   ```

Our approach to SCIM

1. run inference:
   ```bash
   python deeplab/scannet_inference.py with subset=$SCENE and pretrained_model=$MODEL
   ```
2. run mapping (for flexibility, we run semantic mapping and uncertainty mapping separately):
   ```bash
   roslaunch panoptic_mapping_utils scannnet_mapping.launch scene:=$SCENE model:=$MODEL inference_path:=
   ```
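The `eps=3.5` and `min_samples=10` arguments passed to `scannet_uhlemeyer.py` are density-clustering parameters in the DBSCAN family. As a rough illustration of what such parameters do (a sketch on toy data, not the repository's implementation; we only assume a DBSCAN-style clusterer is applied to per-pixel feature vectors):

```python
import numpy as np
from sklearn.cluster import DBSCAN

# Toy feature vectors: two dense groups plus one stray point.
features = np.array(
    [[0.0, 0.0], [0.1, 0.0], [0.0, 0.1], [0.1, 0.1],
     [5.0, 5.0], [5.1, 5.0], [5.0, 5.1], [5.1, 5.1],
     [20.0, 20.0]]
)

# eps bounds the neighbourhood radius and min_samples the density
# threshold, mirroring the eps/min_samples flags above.
labels = DBSCAN(eps=1.0, min_samples=3).fit_predict(features)
# Points too sparse to join any cluster are labelled -1,
# i.e. treated as unassigned/outlier.
```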
We offer a Dockerfile that installs the whole codebase into a container. To install individual parts instead, see below:
Clustering & Learning
This part is implemented in python. To install it, run:
```bash
git clone https://github.com/hermannsblum/scim.git
cd scim && python -m pip install -e .
```
Mapping
For mapping, we rely on an existing mapping framework, which is implemented in ROS.
First, create a catkin workspace:
```bash
sudo apt-get install python3-catkin-tools
mkdir -p ~/catkin_ws/src
cd ~/catkin_ws
catkin init
catkin config --extend /opt/ros/noetic
catkin config --cmake-args -DCMAKE_BUILD_TYPE=RelWithDebInfo
catkin config --merge-devel
```
Then install the framework into a catkin workspace:
```bash
wstool init \
  && git clone --branch hermann-devel https://github.com/ethz-asl/panoptic_mapping.git \
  && wstool merge panoptic_mapping/panoptic_mapping_https.rosinstall \
  && wstool update -j8 \
  && catkin build panoptic_mapping_utils point_cloud_io
```
Data Structure
All intermediate outputs of the different steps are stored in a folder. This folder needs to be set correctly in a few places:
Add a file semsegcluster/settings.py with the following content:
```python
EXPERIMENT_STORAGE_FOLDER = '<folder for experimental logs>'
TMPDIR = '/tmp'
TMP_DIR = '/tmp'
EXP_OUT = '<folder for outputs>'
```
The <folder for outputs> is also the one that should be used in the inference_path:= argument to the roslaunch files.
Experiment logs are stored with sacred. If you want to track them in a database instead of a folder, add the following lines to settings.py:
```python
EXPERIMENT_DB_HOST =
EXPERIMENT_DB_USER =
EXPERIMENT_DB_PWD =
EXPERIMENT_DB_NAME =
```
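sacred supports both file-based and MongoDB-backed experiment observers, so the choice between folder and database tracking presumably hinges on whether these fields are filled in. A minimal sketch of that decision (the helper `observer_kind` and its logic are our illustration, not part of the repository):

```python
def observer_kind(settings: dict) -> str:
    """Decide how sacred should store experiment logs.

    Hypothetical helper: if all database fields from settings.py are set,
    a MongoDB-backed observer (sacred's MongoObserver) can be used;
    otherwise logs go to EXPERIMENT_STORAGE_FOLDER via a file observer.
    """
    db_keys = ('EXPERIMENT_DB_HOST', 'EXPERIMENT_DB_USER',
               'EXPERIMENT_DB_PWD', 'EXPERIMENT_DB_NAME')
    if all(settings.get(key) for key in db_keys):
        return 'mongo'
    return 'file'
```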
ScanNet
Unfortunately, the code does not yet download directly from ScanNet. Therefore, first download the relevant scenes as described here, then put them into a zip archive called valscans.zip stored at ~/tensorflow_datasets/downloads/manual/valscans.zip. TFDS will then automatically extract, resize, and load the scenes.
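The expected location follows TFDS's manual-download convention and can be checked with a few lines of Python (a convenience sketch; the helper name is ours):

```python
from pathlib import Path

def valscans_archive(home) -> Path:
    # TFDS looks for manually downloaded archives under
    # <home>/tensorflow_datasets/downloads/manual/.
    return Path(home) / 'tensorflow_datasets' / 'downloads' / 'manual' / 'valscans.zip'

archive = valscans_archive(Path.home())
if not archive.exists():
    print(f'Missing ScanNet archive: place valscans.zip at {archive}')
```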
Owner
- Name: Hermann
- Login: hermannsblum
- Kind: user
- Company: ETH Zürich
- Repositories: 8
- Profile: https://github.com/hermannsblum
PhD Student at the Autonomous Systems Lab
Citation (CITATION.cff)
cff-version: 1.2.0
message: "If you use this software, please cite it as below."
authors:
- family-names: "Blum"
given-names: "Hermann"
orcid: "https://orcid.org/0000-0002-1713-7877"
- family-names: "Müller"
given-names: "Marcus G"
- family-names: "Gawel"
given-names: "Abel"
- family-names: "Siegwart"
given-names: "Roland"
- family-names: "Cadena"
given-names: "Cesar"
title: "SCIM: Simultaneous Clustering, Inference, and Mapping for Open-World Semantic Scene Understanding"
version: 1.0.0
date-released: 2022-07-19
url: "https://arxiv.org/abs/2206.10670"
preferred-citation:
type: conference-paper
authors:
- family-names: "Blum"
given-names: "Hermann"
orcid: "https://orcid.org/0000-0002-1713-7877"
- family-names: "Müller"
given-names: "Marcus G"
- family-names: "Gawel"
given-names: "Abel"
- family-names: "Siegwart"
given-names: "Roland"
- family-names: "Cadena"
given-names: "Cesar"
title: "SCIM: Simultaneous Clustering, Inference, and Mapping for Open-World Semantic Scene Understanding"
year: 2022
GitHub Events
Committers
Last synced: 11 months ago
Top Committers
| Name | Email | Commits |
|---|---|---|
| Hermann | b****h@e****h | 210 |
| René Zurbrügg | z****e@s****h | 1 |
Committer Domains (Top 20 + Academic)
Issues and Pull Requests
Last synced: 9 months ago
All Time
- Total issues: 0
- Total pull requests: 3
- Average time to close issues: N/A
- Average time to close pull requests: 1 day
- Total issue authors: 0
- Total pull request authors: 2
- Average comments per issue: 0
- Average comments per pull request: 0.0
- Merged pull requests: 3
- Bot issues: 0
- Bot pull requests: 0
Past Year
- Issues: 0
- Pull requests: 0
- Average time to close issues: N/A
- Average time to close pull requests: N/A
- Issue authors: 0
- Pull request authors: 0
- Average comments per issue: 0
- Average comments per pull request: 0
- Merged pull requests: 0
- Bot issues: 0
- Bot pull requests: 0
Top Authors
Issue Authors
Pull Request Authors
- hermannsblum (2)
- renezurbruegg (1)
Top Labels
Issue Labels
Pull Request Labels
Dependencies
- gdown *
- hdbscan *
- hnswlib *
- incense *
- kornia *
- markov_clustering *
- numpy *
- open3d *
- pymongo ==3.12
- sacred *
- scikit-learn *
- scikit-optimize *
- torchmetrics *
- actions/checkout v3 composite
- hermannsblum/nvidia-ros noetic build