resolution-upscaling-of-3d-time-of-flight-sensor
https://github.com/isc-zhaw/resolution-upscaling-of-3d-time-of-flight-sensor
Science Score: 57.0%
This score indicates how likely this project is to be science-related based on various indicators:
- ✓ CITATION.cff file: found CITATION.cff file
- ✓ codemeta.json file: found codemeta.json file
- ✓ .zenodo.json file: found .zenodo.json file
- ✓ DOI references: found 2 DOI reference(s) in README
- ○ Academic publication links
- ○ Academic email domains
- ○ Institutional organization owner
- ○ JOSS paper metadata
- ○ Scientific vocabulary similarity: low similarity (12.6%) to scientific vocabulary
Repository
Basic Info
Statistics
- Stars: 0
- Watchers: 0
- Forks: 0
- Open Issues: 0
- Releases: 0
Metadata Files
README.md
Resolution Upscaling of 3D Time-of-Flight Sensor by Fusion with RGB Camera
Yannick Waelti, Matthias Ludwig, Josquin Rosset, Teddy Loeliger,
Institute of Signal Processing and Wireless Communications (ISC),
ZHAW Zurich University of Applied Sciences
3D time-of-flight (3D ToF) cameras enable depth perception but typically suffer from low resolution. To increase the resolution of the 3D ToF depth map, a fusion approach with a high-resolution RGB camera featuring a new edge extrapolation algorithm is proposed, implemented and benchmarked here. Despite the presence of artifacts in the output, the resulting high-resolution depth maps exhibit very clean edges when compared to other state-of-the-art spatial upscaling methods. The new algorithm first interpolates the depth map of the 3D ToF camera and combines it with an RGB image to extract an edge map. The blurred edges of the depth map are then replaced by an extrapolation from neighboring pixels for the final high-resolution depth map. A custom 3D ToF and RGB fusion hardware is used to create a new 3D ToF dataset for evaluating the image quality of the upscaling approach. In addition, the algorithm is benchmarked using the Middlebury 2005 stereo vision dataset. The proposed edge extrapolation algorithm typically achieves an effective upscaling factor greater than 2 in both the x and y directions.
Setup
Dependencies
We recommend using the provided Dockerfile to run our code. Use the commands below to build and run the container.
docker build -t tof_rgb_fusion:1.0 --build-arg USER_ID=$(id -u) --build-arg GROUP_ID=$(id -g) .
docker run --name tof_rgb_fusion --gpus all --mount type=bind,source=/path/to/repository,target=/ToF_RGB_Fusion -dt tof_rgb_fusion:1.0
Make sure to include the submodules, either by cloning the repository with the --recursive option or by running git submodule update --init --recursive if the repository has already been cloned.
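The submodule steps above can be sketched as follows (the repository URL is taken from this page's header):

```shell
# Fresh clone, including all submodules
git clone --recursive https://github.com/isc-zhaw/resolution-upscaling-of-3d-time-of-flight-sensor.git

# If the repository was already cloned without --recursive
git submodule update --init --recursive
```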
Datasets
ZHAW-ISC 3D ToF and RGB Fusion
Download the dataset from Zenodo and place the files under data/ZHAW_ISC.
Middlebury
From within the dataset directory, run the download_middlebury_2014.sh script to download the hole-filled Middlebury 2005 and the original Middlebury 2014 datasets. To create the downscaled images, run dataset/create_middlebury_dataset.py from the repository root.
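A minimal sketch of the two dataset steps above, assuming the dataset directory is the dataset/ folder at the repository root (as the second path suggests):

```shell
# Download the hole-filled Middlebury 2005 and original 2014 datasets
cd dataset
./download_middlebury_2014.sh
cd ..

# Create the downscaled images from the repository root
python dataset/create_middlebury_dataset.py
```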
Methods
DADA Checkpoints
Get the model checkpoints for the DADA approach from their repository and extract the contents of the .zip file into the model_checkpoints/DADA folder.
AHMF
Get the model checkpoints from the official repository and place the files under model_checkpoints/AHMF.
To make all models loadable, change all kernel_size arguments in the UpSampler and InvUpSampler to 5. Also, replace from collections import Iterable with from collections.abc import Iterable.
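The import change above is needed because Python 3.10 removed the old aliases for abstract base classes in the collections module. A minimal sketch of the corrected import (the flatten helper is illustrative, not from the AHMF repository):

```python
# Python 3.10+ removed "from collections import Iterable";
# the ABC now lives in collections.abc.
from collections.abc import Iterable


def flatten(x):
    # Illustrative use of Iterable: recursively flatten nested sequences,
    # treating strings as atomic values.
    for item in x:
        if isinstance(item, Iterable) and not isinstance(item, str):
            yield from flatten(item)
        else:
            yield item


print(list(flatten([1, [2, [3]], "ab"])))  # [1, 2, 3, 'ab']
```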
FDKN
To use the DKN and FDKN models, some changes need to be made to the code from the official repository:
- Add align_corners=True to all calls of `F.grid_sample` if you use a PyTorch version > 1.12.
- If you get a `CUDNN_STATUS_NOT_SUPPORTED` error, wrap the `F.grid_sample` call in a `with torch.backends.cudnn.flags(enabled=False):` statement.
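Both fixes can be sketched together as follows; the feat and grid tensors are illustrative stand-ins for the DKN/FDKN feature map and sampling grid, not names from the repository:

```python
import torch
import torch.nn.functional as F

# Illustrative inputs: a (N, C, H, W) feature map and a
# (N, H_out, W_out, 2) sampling grid with coordinates in [-1, 1].
feat = torch.zeros(1, 1, 4, 4)
grid = torch.zeros(1, 2, 2, 2)

# Newer PyTorch versions require align_corners to be passed explicitly.
out = F.grid_sample(feat, grid, align_corners=True)

# If the call raises CUDNN_STATUS_NOT_SUPPORTED on the GPU, disable
# cuDNN just for this statement:
with torch.backends.cudnn.flags(enabled=False):
    out = F.grid_sample(feat, grid, align_corners=True)

print(out.shape)  # torch.Size([1, 1, 2, 2])
```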
Evaluation
Run model_evaluation.py to get metrics and upscaled depth maps for the different approaches. Methods can be specified with the -m option (default: all) and upscaling factors with -s (one or more of x4, x8, x16 or x32).
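Example invocations, assuming the options behave as described above (the method name passed to -m is illustrative; check the script's help output for the actual choices):

```shell
# Evaluate all methods at all upscaling factors (defaults)
python model_evaluation.py

# Evaluate a single method at the x4 and x8 factors
python model_evaluation.py -m DADA -s x4 x8
```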
Citation
@software{Waelti_Efficient_Depth_and,
author = {Waelti, Yannick and Ludwig, Matthias and Rosset, Josquin and Loeliger, Teddy},
license = {MIT},
title = {{Resolution Upscaling of 3D Time-of-Flight Sensor by Fusion with RGB Camera}},
url = {https://github.com/isc-zhaw/Resolution-Upscaling-of-3D-Time-of-Flight-Sensor}
}
Owner
- Name: isc-zhaw
- Login: isc-zhaw
- Kind: organization
- Repositories: 1
- Profile: https://github.com/isc-zhaw
Citation (CITATION.cff)
# This CITATION.cff file was generated with cffinit.
# Visit https://bit.ly/cffinit to generate yours today!
cff-version: 1.2.0
title: >-
Resolution Upscaling of 3D Time-of-Flight Sensor by Fusion with RGB Camera
message: >-
If you use this software, please cite it using the
metadata from this file.
type: software
authors:
- given-names: Yannick
family-names: Waelti
email: yannick.waelti@zhaw.ch
orcid: 'https://orcid.org/0009-0001-9529-7251'
affiliation: Zurich University of Applied Sciences
- given-names: Matthias
family-names: Ludwig
orcid: 'https://orcid.org/0009-0009-4586-8616'
affiliation: Zurich University of Applied Sciences
- given-names: Josquin
family-names: Rosset
affiliation: Zurich University of Applied Sciences
- given-names: Teddy
family-names: Loeliger
affiliation: Zurich University of Applied Sciences
repository-code: >-
https://github.com/isc-zhaw/Efficient-Depth-and-RGB-Camera-Fusion-Algorithm
abstract: >-
3D time-of-flight (3D ToF) cameras enable depth
perception but typically suffer from low resolution. To increase
the resolution of the 3D ToF depth map, a fusion approach with a
high-resolution RGB camera featuring a new edge extrapolation
algorithm is proposed, implemented and benchmarked here.
Despite the presence of artifacts in the output, the resulting
high-resolution depth maps exhibit very clean edges when compared
to other state-of-the-art spatial upscaling methods. The new
algorithm first interpolates the depth map of the 3D ToF camera
and combines it with an RGB image to extract an edge map.
The blurred edges of the depth map are then replaced by an
extrapolation from neighboring pixels for the final high-resolution
depth map. A custom 3D ToF and RGB fusion hardware is
used to create a new 3D ToF dataset for evaluating the image
quality of the upscaling approach. In addition, the algorithm is
benchmarked using the Middlebury 2005 stereo vision dataset.
The proposed edge extrapolation algorithm typically achieves an
effective upscaling factor greater than 2 in both the x and y
directions.
keywords:
- 3D Time-of-Flight (3D ToF)
- Sensor fusion
- Resolution upscaling
- Depth map upsampling
- Edge Detection
- Edge extrapolation
license: MIT
Dependencies
- pytorch/pytorch 2.0.0-cuda11.7-cudnn8-devel build
- configargparse *
- cudatoolkit ==11.7
- h5py *
- matplotlib *
- opencv-contrib-python-headless *
- pytorch ==2.0.0
- scikit-image ==0.18.3
- scipy *
- segmentation-models-pytorch ==0.2
- setuptools ==59.5.0
- torchvision ==0.15.1
- tqdm *
- wandb *