# HoloBeam: Paper-Thin Near-Eye Displays

Kaan Akşit and Yuta Itoh

[Website], [Manuscript]

## Description

This repository contains the codebase for the learned model discussed in our work. The work extends our previous optimization-based Computer-Generated Holography (CGH) pipeline by converting it into a learned model. With this model, you can estimate a 3D hologram from a 2D input image without any depth map; all a user needs is a single 2D image. This way, the most common media type, images, can be converted directly into 3D holograms, with their depths estimated by our algorithm during the hologram estimation process. If you need support beyond this README.md, please do not hesitate to reach out to us using the issues section.
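
To make the multiplanar idea concrete, the sketch below reconstructs an estimated phase-only hologram at a few depth planes using odak's beam propagation utilities. The file name, wavelength, pixel pitch, and propagation distances are illustrative assumptions, not values from this codebase:

```python
# A minimal reconstruction sketch using odak; the file names, wavelength,
# pixel pitch, and propagation distances are illustrative assumptions.
import torch
import odak

wavelength = 515e-9                        # assumed wavelength [m]
pixel_pitch = 8e-6                         # assumed pixel pitch [m]
k = odak.learn.wave.wavenumber(wavelength) # wavenumber, 2 * pi / wavelength

# Load an estimated phase-only hologram (hypothetical file name) and lift it
# to a complex field with unit amplitude.
phase = odak.learn.tools.load_image(
                                    'estimated_hologram.png',
                                    normalizeby = 255.,
                                    torch_style = True
                                   ) * 2. * torch.pi
hologram = odak.learn.wave.generate_complex_field(torch.ones_like(phase), phase)

# Reconstruct the hologram at a few assumed depth planes and save the
# resulting intensities.
for plane_id, distance in enumerate([1e-3, 5e-3, 1e-2]):
    reconstruction = odak.learn.wave.propagate_beam(
                                                    hologram,
                                                    k,
                                                    distance,
                                                    pixel_pitch,
                                                    wavelength,
                                                    propagation_type = 'Bandlimited Angular Spectrum'
                                                   )
    intensity = odak.learn.wave.calculate_amplitude(reconstruction) ** 2
    odak.learn.tools.save_image(
                                'reconstruction_{:02d}.png'.format(plane_id),
                                intensity,
                                cmin = 0.,
                                cmax = float(intensity.max())
                               )
```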

## Citation

If you find this repository useful for your research, please consider citing our work using the BibTeX entry below.

```bibtex
@ARTICLE{aksit2023holobeam,
  title    = "HoloBeam: Paper-Thin Near-Eye Displays",
  author   = "Akşit, Kaan and Itoh, Yuta",
  journal  = "IEEE VR 2023",
  month    =  mar,
  year     =  2023,
  language = "en",
}
```

## Getting started

This repository contains a codebase for estimating holograms that can generate multiplanar images without requiring depth information.

### (0) Requirements

Before using the code in this repository, please make sure that you have the required dependencies installed. To install the main dependency of this project, use one of the following commands in a Unix/Linux shell:

```bash
pip3 install git+https://github.com/kaanaksit/odak
```

or

```bash
pip3 install odak
```
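
After installation, a quick import check can confirm that the package loads (a minimal sketch; it only verifies that odak is importable):

```python
# Minimal sanity check; confirms odak is installed and importable.
import odak
print(odak.__file__)
```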

### (1) Runtime

Once you have the main dependency installed, you can run the codebase with the default settings using the following commands:

```bash
git clone git@github.com:complight/holobeam_multiholo.git
cd holobeam_multiholo
python3 main.py
```

A trained model can be tried using the following syntax:

```bash
python3 main.py --weights weights/weights.pt --settings settings/jasper.txt --input some_4k_image.png
```

Make sure to replace the weights, settings, and input paths above with the locations of your own files.

### (2) Reconfiguring the code for your needs

Please consult the settings file found in settings/jasper.txt, where you will find a list of self-descriptive variables that you can modify according to your needs. You can either modify this file or create a new settings file from it. By typing

```bash
python3 main.py --help
```

you can get more information on the training and estimation options of this work.

If you would like to use the code with a different settings file, use the following syntax:

```bash
python3 main.py --settings settings/sample.txt
```
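
If you prefer to prepare settings files programmatically, the sketch below assumes the settings/*.txt files hold JSON-style dictionaries; the key name shown is hypothetical, so inspect the printed keys from the actual file first:

```python
# A minimal sketch for programmatically editing a settings file, assuming the
# settings/*.txt files hold JSON (the key name below is hypothetical).
import json

with open('settings/jasper.txt') as settings_file:
    settings = json.load(settings_file)
print(list(settings.keys()))  # inspect the self-descriptive variables

# settings['some variable'] = new_value  # hypothetical key, edit as needed

with open('settings/custom.txt', 'w') as settings_file:
    json.dump(settings, settings_file, indent = 4)
```

You could then run the code with the new file via `python3 main.py --settings settings/custom.txt`.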

## Support

For further support regarding the codebase, please use the issues section of this repository to raise your issues and questions.