
πŸπŸ“· Smartphone Based RTI


Reflectance Transformation Imaging (RTI) using footage from two smartphones, without requiring an expensive light dome. Written in Python using OpenCV.

πŸŽ₯ Input

sample_input

Footage by professor Filippo Bergamasco (Ca' Foscari University of Venice)

πŸ•Ή Output (interactive)

sample_output

This project is the assignment for the course Geometric and 3D Computer Vision 2020/2021.

See FinalProject.pdf for more details on the assignment and to download the required assets.

πŸ“¦ Downloading assets (CoinDataset)

Before running the scripts you need to download the required assets, which should include:

  • The calibration videos for both cameras
  • The footage from the static camera
  • The footage from the moving camera

You need to extract them into a new folder called assets/coins in the root of the project. Your folder structure should look like this:

folder_structure

If needed, you can change the location of the input files by editing the corresponding rows in constants.py under the heading COINS ASSETS FILE NAMES AND DELAY BETWEEN FOOTAGE.
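For reference, the entries under that heading might look roughly like the sketch below. The constant names, paths, and delay value here are hypothetical, shown only to illustrate the idea; check constants.py for the real ones.

# COINS ASSETS FILE NAMES AND DELAY BETWEEN FOOTAGE
# (hypothetical entries for illustration only -- the actual names live in constants.py)
CALIBRATION_STATIC_VIDEO = "assets/coins/calibration_static.mp4"
CALIBRATION_MOVING_VIDEO = "assets/coins/calibration_moving.mp4"
STATIC_FOOTAGE_VIDEO     = "assets/coins/coin_static.mp4"
MOVING_FOOTAGE_VIDEO     = "assets/coins/coin_moving.mp4"
DELAY_BETWEEN_FOOTAGE    = 2.5  # seconds between the start of the two recordings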

πŸ“¦ Downloading assets (SynthRTIDataset)

The scripts also support the SynthRTIDataset from the paper "Neural Reflectance Transformation Imaging". To use it, each folder of the dataset should include:

  • The images in JPG format
  • A file named dirs.lp (see the parsing sketch after this list)
  • An image called normals.png (optional)
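dirs.lp is a light-direction file. Assuming the common .lp convention (first line holds the number of entries, then one "filename lx ly lz" line per image), a minimal parsing sketch could look like this:

import numpy as np

def load_lp(path):
    # Assumed .lp layout: first line = number of entries,
    # then one "filename lx ly lz" line per image.
    with open(path) as f:
        n = int(f.readline().strip())
        names, dirs = [], []
        for _ in range(n):
            parts = f.readline().split()
            names.append(parts[0])
            dirs.append([float(v) for v in parts[1:4]])
    return names, np.asarray(dirs)

# e.g. names, light_dirs = load_lp("assets/synthRTI/.../dirs.lp")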

You need to extract them into a new folder called assets/synthRTI in the root of the project. Your folder structure should look like this:

folder_structure

πŸ”§ Usage

After downloading the assets you can simply run these commands and follow the TUI:

python3 camera_calibrator.py        # Get camera intrinsics
python3 analysis.py                 # Get data or model from footage
python3 interactive_relighting.py   # View output

When using the machine learning models you can skip the interpolation step and compute the output in real time.

βš™οΈ Interpolation methods available

  • Linear RBF (from the SciPy library)
  • Polynomial Texture Maps (based on the homonymous paper by Tom Malzbender, Dan Gelb, and Hans Wolters; a minimal fitting sketch follows this list)
  • PCA Model (machine learning model based on the paper "On-the-go Reflectance Transformation Imaging with Ordinary Smartphones" by Mara Pistellato and Filippo Bergamasco)
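As a reference for the PTM option, the sketch below is not the code used by analysis.py but a minimal illustration of the biquadratic model from the Malzbender et al. paper: each pixel's intensity is modelled as L(lu, lv) = a0·lu² + a1·lv² + a2·lu·lv + a3·lu + a4·lv + a5, with the six coefficients fitted by least squares.

import numpy as np

def fit_ptm(light_dirs, intensities):
    # light_dirs:  (N, 2) array of projected light directions (lu, lv)
    # intensities: (N, P) array, one row of pixel intensities per light
    lu, lv = light_dirs[:, 0], light_dirs[:, 1]
    A = np.stack([lu**2, lv**2, lu * lv, lu, lv, np.ones_like(lu)], axis=1)
    coeffs, *_ = np.linalg.lstsq(A, intensities, rcond=None)  # shape (6, P)
    return coeffs.T                                           # shape (P, 6)

def relight_ptm(coeffs, lu, lv):
    # Evaluate the fitted model for a new light direction (lu, lv).
    basis = np.array([lu**2, lv**2, lu * lv, lu, lv, 1.0])
    return coeffs @ basis  # (P,) relit pixel intensities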

πŸ”¬ Analysis debug mode descriptions

#   Mode name       Features
0   No debug        -
1   Minimal debug   Live footage, current light direction, marker contours
2   Full debug      Minimal debug + moving camera threshold, warped moving frame, highlighted corners