awesome-robust-depth-estimation

A curated list of awesome robust depth estimation papers

https://github.com/hitcslj/awesome-robust-depth-estimation

Science Score: 67.0%

This score indicates how likely this project is to be science-related based on various indicators:

  • CITATION.cff file
    Found CITATION.cff file
  • codemeta.json file
    Found codemeta.json file
  • .zenodo.json file
    Found .zenodo.json file
  • DOI references
    Found 1 DOI reference(s) in README
  • Academic publication links
    Links to: arxiv.org, springer.com
  • Academic email domains
  • Institutional organization owner
  • JOSS paper metadata
  • Scientific vocabulary similarity
    Low similarity (4.8%) to scientific vocabulary
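The hosting service does not publish how these indicators combine into the 67.0% figure; a minimal sketch of an indicator-based score, with hypothetical names and weights, might look like this:

```python
# Hypothetical sketch: the indicator names and weights below are
# illustrative assumptions, not the service's actual formula.
INDICATOR_WEIGHTS = {
    "citation_cff": 0.15,           # CITATION.cff file present
    "codemeta_json": 0.15,          # codemeta.json file present
    "zenodo_json": 0.10,            # .zenodo.json file present
    "doi_references": 0.10,         # DOI reference(s) found in README
    "publication_links": 0.10,      # links to arxiv.org, springer.com, ...
    "academic_email": 0.10,         # academic email domains
    "institutional_owner": 0.10,    # institutional organization owner
    "joss_metadata": 0.10,          # JOSS paper metadata
    "vocabulary_similarity": 0.10,  # scaled by similarity, not boolean
}

def science_score(signals: dict) -> float:
    """Weighted sum of indicator signals, returned as a percentage.

    `signals` maps indicator name -> value in [0, 1]: 1.0 for a
    present file, a fraction for graded signals such as vocabulary
    similarity (e.g. 0.048 for the 4.8% similarity above).
    """
    total = sum(INDICATOR_WEIGHTS.values())
    hit = sum(w * signals.get(name, 0.0)
              for name, w in INDICATOR_WEIGHTS.items())
    return round(100.0 * hit / total, 1)
```

With all boolean indicators present and a low vocabulary similarity, such a scheme lands in the same rough range as the score shown, but the real weighting may differ substantially.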
Last synced: 6 months ago

Repository

A curated list of awesome robust depth estimation papers

Basic Info
  • Host: GitHub
  • Owner: hitcslj
  • License: MIT
  • Default Branch: main
  • Size: 5.53 MB
Statistics
  • Stars: 23
  • Watchers: 1
  • Forks: 1
  • Open Issues: 0
  • Releases: 0
Created about 2 years ago · Last pushed over 1 year ago
Metadata Files
Readme License Citation

README.md

awesome-robust-depth-estimation

A curated list of awesome robust depth estimation papers, inspired by awesome-NeRF.

(Figure: teaser)

How to submit a pull request?
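New entries are added via pull request; a template for a single entry, inferred from the existing list items below (the placeholder title, URL, and filenames are illustrative, not real references):

```markdown
- [Paper Title](https://arxiv.org/abs/XXXX.XXXXX), Author et al., VENUE YEAR | [github](https://github.com/user/repo) | [bibtex](./citations/shortname.txt)
```

The github and bibtex links are optional; omit whichever is unavailable, as some existing entries do.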

Table of Contents

Survey

  • TODO

Papers

(Figure: knowledge map)

darkness & adverse weather robust

multimodality

- [Depth Estimation from Monocular Images and Sparse Radar Data](https://arxiv.org/abs/2010.00058), Lin et al., IROS 2020 | [github](https://github.com/brade31919/radar_depth) | [bibtex](./citations/deisr.txt)
- [R4Dyn: Exploring Radar for Self-Supervised Monocular Depth Estimation of Dynamic Scenes](https://arxiv.org/abs/2108.04814), Gasperini et al., 3DV 2021 | [bibtex](./citations/R4Dyn.txt)
- [Deep Depth Estimation From Thermal Image](https://openaccess.thecvf.com/content/CVPR2023/html/Shin_Deep_Depth_Estimation_From_Thermal_Image_CVPR_2023_paper.html), Shin et al., CVPR 2023 | [github](https://github.com/UkcheolShin/MS2-MultiSpectralStereoDataset) | [bibtex](./citations/DET.txt)
mirror robust

- [Learning Depth Estimation for Transparent and Mirror Surfaces](https://arxiv.org/abs/2307.15052), Costanzino et al., ICCV 2023 | [github](https://github.com/CVLAB-Unibo/Depth4ToM-code#-learning-depth-estimation-for-transparent-and-mirror-surfaces-iccv-2023-) | [bibtex](./citations/Depth2M.txt)
pose robust

- [Towards Scale-Aware, Robust, and Generalizable Unsupervised Monocular Depth Estimation by Integrating IMU Motion Dynamics](https://arxiv.org/abs/2207.04680), Zhang et al., ECCV 2022 | [github](https://github.com/SenZHANG-GitHub/ekf-imu-depth) | [bibtex](./citations/ekf-imu-depth.txt)
robust architecture

- [Vision Transformers for Dense Prediction](https://arxiv.org/abs/2103.13413), Ranftl et al., ICCV 2021 | [github](https://github.com/isl-org/DPT) | [bibtex](./citations/dpt.txt)
- [MonoViT: Self-Supervised Monocular Depth Estimation with a Vision Transformer](https://arxiv.org/abs/2208.03543), Zhao et al., 3DV 2022 | [github](https://github.com/zxcqlf/MonoViT) | [bibtex](./citations/monovit.txt)
- [LDM3D: Latent Diffusion Model for 3D](https://arxiv.org/abs/2305.10853), Stan et al., CVPRW 2023 | [huggingface](https://huggingface.co/Intel/ldm3d) | [bibtex](./citations/ldm3d.txt)
zero-shot depth estimation

- [Towards Robust Monocular Depth Estimation: Mixing Datasets for Zero-shot Cross-dataset Transfer](https://arxiv.org/abs/1907.01341), Ranftl et al., TPAMI 2020 | [github](https://github.com/isl-org/MiDaS) | [bibtex](./citations/midas.txt)
- [ZoeDepth: Zero-shot Transfer by Combining Relative and Metric Depth](https://arxiv.org/abs/2302.12288), Bhat et al., arXiv 2023 | [github](https://github.com/isl-org/ZoeDepth) | [bibtex](./citations/zoedepth.txt)
- [Towards Zero-Shot Scale-Aware Monocular Depth Estimation](https://arxiv.org/abs/2306.17253), Guizilini et al., ICCV 2023 | [github](https://github.com/tri-ml/vidar) | [bibtex](./citations/zerodepth.txt)
- [The Surprising Effectiveness of Diffusion Models for Optical Flow and Monocular Depth Estimation](https://arxiv.org/abs/2306.01923), Saxena et al., NeurIPS 2023 | [bibtex](./citations/ddvm.txt)
- [Metric3D: Towards Zero-shot Metric 3D Prediction from A Single Image](https://arxiv.org/abs/2307.10984), Yin et al., ICCV 2023 | [github](https://github.com/YvanYin/Metric3D) | [bibtex](./citations/metric3d.txt)
- [MiDaS v3.1 -- A Model Zoo for Robust Monocular Relative Depth Estimation](https://arxiv.org/abs/2307.14460), Birkl et al., arXiv 2023 | [github](https://github.com/isl-org/MiDaS) | [bibtex](./citations/midas3.txt)
- [Metric3Dv2: A Versatile Monocular Geometric Foundation Model for Zero-shot Metric Depth and Surface Normal Estimation](https://arxiv.org/abs/2404.15506), Hu et al., arXiv 2024 | [github](https://github.com/YvanYin/Metric3D) | [bibtex](./citations/metric3dv2.txt)
- [Zero-Shot Metric Depth with a Field-of-View Conditioned Diffusion Model](https://arxiv.org/abs/2312.13252), Saxena et al., arXiv 2023 | [bibtex](./citations/fvcdm.txt)
- [Depth Anything: Unleashing the Power of Large-Scale Unlabeled Data](https://arxiv.org/abs/2401.10891), Yang et al., CVPR 2024 | [github](https://github.com/LiheYoung/Depth-Anything) | [bibtex](./citations/depthanything.txt)
- [Repurposing Diffusion-Based Image Generators for Monocular Depth Estimation](https://arxiv.org/abs/2312.02145), Ke et al., CVPR 2024 | [github](https://github.com/prs-eth/marigold) | [bibtex](./citations/marigold.txt)
- [DepthFM: Fast Monocular Depth Estimation with Flow Matching](https://arxiv.org/abs/2403.13788), Gui et al., arXiv 2024 | [github](https://github.com/CompVis/depth-fm) | [bibtex](./citations/depthFM.txt)
- [Depth Anything V2](https://arxiv.org/abs/2406.09414), Yang et al., arXiv 2024 | [github](https://github.com/DepthAnything/Depth-Anything-V2) | [bibtex](./citations/depthanythingv2.txt)
- [GeoWizard: Unleashing the Diffusion Priors for 3D Geometry Estimation from a Single Image](https://arxiv.org/abs/2403.12013), Fu et al., ECCV 2024 | [github](https://github.com/fuxiao0719/GeoWizard) | [bibtex](./citations/geowizard.txt)
cross camera & scene

- [Learning to Recover 3D Scene Shape from a Single Image](https://arxiv.org/abs/2012.09365), Yin et al., CVPR 2021 | [github](https://github.com/aim-uofa/AdelaiDepth) | [bibtex](./citations/LeReS.txt)
- [Adaptive Fusion of Single-View and Multi-View Depth for Autonomous Driving](https://arxiv.org/abs/2403.07535), Cheng et al., CVPR 2024 | [github](https://github.com/Junda24/AFNet) | [bibtex](./citations/AFNet.txt)
- [SM4Depth: Seamless Monocular Metric Depth Estimation across Multiple Cameras and Scenes by One Model](https://arxiv.org/abs/2403.08556), Liu et al., arXiv 2024 | [github](https://github.com/1hao-Liu/SM4Depth) | [bibtex](./citations/sm4depth.txt)

Benchmarks and Datasets

Talks

  • TODO

Challenge

Implementations

  • TODO

License

awesome-robust-depth-estimation is released under the MIT license.

Contact

Primary contact: hitcslj@stu.hit.edu.cn. You can also contact: maoyf1105@163.com.

Owner

  • Name: Jian Liu
  • Login: hitcslj
  • Kind: user
  • Location: Harbin, Heilongjiang, China
  • Company: Harbin Institute of Technology

PhD Student @ HIT | Research Intern @ Megvii-research

Citation (citations/3d2fool.txt)

@article{zheng2024physical,
  title={Physical 3D Adversarial Attacks against Monocular Depth Estimation in Autonomous Driving},
  author={Zheng, Junhao and Lin, Chenhao and Sun, Jiahao and Zhao, Zhengyu and Li, Qian and Shen, Chao},
  journal={arXiv preprint arXiv:2403.17301},
  year={2024}
}

GitHub Events

Total
  • Watch event: 6
Last Year
  • Watch event: 6