https://github.com/complight/holobeam_multiholo

:goggles: HoloBeam: Paper-Thin Near-Eye Displays


Science Score: 10.0%

This score indicates how likely this project is to be science-related based on various indicators:

  • CITATION.cff file
  • codemeta.json file
  • .zenodo.json file
  • DOI references
  • Academic publication links
    Links to: arxiv.org
  • Academic email domains
  • Institutional organization owner
  • JOSS paper metadata
  • Scientific vocabulary similarity
    Low similarity (17.1%) to scientific vocabulary

Keywords

computational-display computer-generated-holography computer-graphic hologram multiplane multiplane-images odak phase-only torch
Last synced: 5 months ago

Repository

:goggles: HoloBeam: Paper-Thin Near-Eye Displays

Basic Info
Statistics
  • Stars: 12
  • Watchers: 1
  • Forks: 2
  • Open Issues: 0
  • Releases: 0
Topics
computational-display computer-generated-holography computer-graphic hologram multiplane multiplane-images odak phase-only torch
Created over 3 years ago · Last pushed about 2 years ago
Metadata Files
Readme License

README.md

HoloBeam: Paper-Thin Near-Eye Displays

Kaan Akşit and Yuta Itoh

[Website], [Manuscript]

Description

In this repository you will find the codebase for the learned model discussed in our work. This work extends our previous optimization-based Computer-Generated Holography (CGH) pipeline by converting it into a learned model. With it, you can estimate a 3D hologram from a 2D input image without any depth map, so all a user needs is a 2D image to generate a hologram. This way, the most common media type, 2D images, can be converted directly into 3D holograms, with their depths estimated by our algorithm during the hologram estimation process. If you need support beyond this README.md, please do not hesitate to reach us through the issues section.
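To make the idea above concrete, here is a minimal, hypothetical sketch (not the authors' actual model) of a learned estimator in PyTorch: a small convolutional network that maps a 2D intensity image to a single-channel phase map in [-π, π], mirroring the notion of predicting a phase-only hologram from a 2D input with no depth map. All layer sizes and names here are invented for illustration.

```python
import torch
import torch.nn as nn


class TinyHologramEstimator(nn.Module):
    """Toy stand-in for a learned 2D-image-to-phase-hologram model."""

    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 8, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Conv2d(8, 8, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Conv2d(8, 1, kernel_size=3, padding=1),
        )

    def forward(self, image):
        # Squash the raw network output into a valid phase range [-pi, pi].
        return torch.pi * torch.tanh(self.net(image))


model = TinyHologramEstimator()
image = torch.rand(1, 1, 64, 64)   # a 2D input image, no depth map attached
phase = model(image)               # estimated phase-only hologram
print(phase.shape)                 # torch.Size([1, 1, 64, 64])
```

In the real pipeline, the predicted phase would drive a spatial light modulator and be trained against multiplane reconstructions; this sketch only illustrates the input/output contract.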

Citation

If you find this repository useful for your research, please consider citing our work using the BibTeX entry below.

```bibtex
@ARTICLE{aksit2023holobeam,
  title    = "HoloBeam: Paper-Thin Near-Eye Displays",
  author   = "Akşit, Kaan and Itoh, Yuta",
  journal  = "IEEE VR 2023",
  month    = mar,
  year     = 2023,
  language = "en",
}
```

Getting started

This repository contains a code base for estimating holograms that can be used to generate multiplanar images without requiring depth information.

(0) Requirements

Before using the code in this repository, please make sure the required dependencies are installed. To install the main dependency used in this project, run the following in a Unix/Linux shell:

```bash
pip3 install git+https://github.com/kaanaksit/odak
```

or

```bash
pip3 install odak
```

(1) Runtime

Once you have the main dependency installed, you can run the codebase with the default settings using the commands below:

```bash
git clone git@github.com:complight/holobeam_multiholo.git
cd holobeam_multiholo
python3 main.py
```

A trained model can be tried using the following syntax:

```bash
python3 main.py --weights weights/weights.pt --settings settings/jasper.txt --input some_4k_image.png
```

Make sure to replace the weights, settings, and input paths above with the locations of your own files.

(2) Reconfiguring the code for your needs

Please consult the settings file found in settings/jasper.txt, where you will find a list of self-descriptive variables that you can modify according to your needs. You can create a new settings file or modify the existing one. For information on training and estimating with this work, run:

```bash
python3 main.py --help
```
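The exact format of settings/jasper.txt is defined in the repository itself; as a generic illustration only, the sketch below reads a simple `key : value` text file into a dictionary. The key names and layout here are invented for demonstration and may not match the real file.

```python
from pathlib import Path


def load_settings(path):
    """Parse a simple 'key : value' settings file into a dict (illustrative only)."""
    settings = {}
    for line in Path(path).read_text().splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue  # skip blank lines and comments
        key, _, value = line.partition(":")
        settings[key.strip()] = value.strip()
    return settings


# Hypothetical example content; the real variables live in settings/jasper.txt.
Path("example_settings.txt").write_text("device : cuda\nnumber_of_planes : 6\n")
print(load_settings("example_settings.txt"))
# {'device': 'cuda', 'number_of_planes': '6'}
```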

If you want to use the code with a different settings file, use the following syntax:

```bash
python3 main.py --settings settings/sample.txt
```

Support

For further support regarding the codebase, please use the issues section of this repository to raise issues and ask questions.

Owner

  • Name: Computational Light Laboratory
  • Login: complight
  • Kind: organization
  • Email: k.aksit@ucl.ac.uk
  • Location: United Kingdom

Research at the intersection of light, computation, graphics and perception.


Dependencies

requirements.txt pypi
  • odak ==0.2.1