
Photo-Realistic Monocular Gaze Redirection Using Generative Adversarial Networks

License: MIT

[Paper] [Video]

Authors: Zhe He, Adrian Spurr, Xucong Zhang, Otmar Hilliges

Contact: zhehe@student.ethz.ch

The following GIFs are made of images generated by our method. For each GIF, the input is a single still image.

Our method is also capable of handling different head poses.

Note

The code here is a development version. It can be used for training, but it may contain redundant code and compatibility issues. The final version will be released soon.

Dependencies

tensorflow == 1.7
numpy == 1.13.1
scipy == 0.19.1
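
If you are setting up a fresh environment, these versions can typically be installed with pip (a sketch, assuming a Python version compatible with TensorFlow 1.7):

pip install tensorflow==1.7.0 numpy==1.13.1 scipy==0.19.1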

Dataset

The dataset contains eye-patch images extracted from the Columbia Gaze Dataset. It can be downloaded via this link.

tar -xvf dataset.tar

The dataset contains six subfolders: N30P/, N15P/, 0P/, P15P/, P30P/, and all/. The prefix 'N' denotes a negative head pose and 'P' a positive head pose. The folder all/ contains the eye-patch images for all head poses.
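
As a quick sanity check after extraction, the sketch below counts the images in each subfolder (illustrative only, not part of the released code; it assumes the archive was extracted into ./dataset/ and that the images are .jpg or .png files):

# Count eye-patch images per head-pose subfolder.
import glob
import os

DATASET_ROOT = "./dataset"
SUBFOLDERS = ["N30P", "N15P", "0P", "P15P", "P30P", "all"]

for name in SUBFOLDERS:
    folder = os.path.join(DATASET_ROOT, name)
    images = glob.glob(os.path.join(folder, "*.jpg")) + glob.glob(os.path.join(folder, "*.png"))
    print("%s: %d images" % (name, len(images)))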

VGG-16 pretrained weights

wget http://download.tensorflow.org/models/vgg_16_2016_08_28.tar.gz
tar -xvf vgg_16_2016_08_28.tar.gz
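
To verify the download, you can list a few of the variables stored in the checkpoint with TensorFlow's checkpoint reader (a sketch, not part of the released code; it assumes the checkpoint was extracted to ./vgg_16.ckpt):

# Print the first few variables in the VGG-16 checkpoint.
import tensorflow as tf

reader = tf.train.NewCheckpointReader("./vgg_16.ckpt")
var_shapes = reader.get_variable_to_shape_map()
for name in sorted(var_shapes)[:10]:
    print(name, var_shapes[name])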

Train

python main.py --mode train --data_path ./dataset/all/ --log_dir ./log/ --batch_size 32 --vgg_path ./vgg_16.ckpt
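
Assuming the training code writes TensorFlow summaries to the --log_dir directory, progress can be monitored with TensorBoard:

tensorboard --logdir ./log/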

Test

To test the model on frontal faces, run the following command.

python main.py --mode eval --data_path ./dataset/0P/ --log_dir ./log/ --batch_size 21

A folder named eval/ will then be created under ./log/. The generated images, input images, and target images are stored in eval/.
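
GIFs like the ones shown above can be assembled by stitching generated frames into an animation, for example with imageio (a sketch, not part of the released code; imageio is an extra dependency, and the file-name pattern of the generated images is an assumption, so adjust the glob to your output):

# Stitch generated frames from ./log/eval/ into a GIF.
import glob
import imageio

frames = [imageio.imread(p) for p in sorted(glob.glob("./log/eval/*.jpg"))]
if frames:
    imageio.mimsave("redirection.gif", frames, duration=0.1)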

About

[Official Implementation] Photo-Realistic Monocular Gaze Redirection Using Generative Adversarial Networks, He et al., ICCV 2019.
