
R-MNet: A Perceptual Adversarial Network for Image Inpainting in Keras

R-MNet: A Perceptual Adversarial Network for Image Inpainting, by Jireh Jam, Connah Kendrick, Vincent Drouard, Kevin Walker, Gee-Sern Hsu, and Moi Hoon Yap.

Keras implementation of the R-MNet model proposed at WACV 2021.

https://arxiv.org/pdf/2008.04621.pdf

Architecture


Download Trained Model For Inference

Download the pre-trained model, create the directory "models/RMNet_WACV2021/", and save the pre-trained weights there before running the inpaint.py file. Note that we used the QuickDraw mask dataset; this can be altered in the script as needed. All instructions are there.
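
For example, a minimal setup sketch in Python (the weight filename is hypothetical; use the name of the file you downloaded):

import os

# Create the directory inpaint.py expects, then place the downloaded
# weights inside it.
os.makedirs('models/RMNet_WACV2021/', exist_ok=True)
# e.g. models/RMNet_WACV2021/rmnet_weights.h5  (hypothetical filename)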

Images dataset

Download Places2 Dataset and CelebA-HQ Dataset

Mask dataset

The mask dataset used to train our model: QD-IMD: Quick Draw Irregular Mask Dataset
NVIDIA's mask dataset is available here

Folder structure

After downloading the datasets, create the folders /images/train/train_images and /masks/train/train_masks, and place the images and masks in train_images and train_masks respectively. The structure should look like this:

-- images
---- train
------ train_images
---- celebA_HQ_test
-- masks
---- train
------ train_masks
---- test_masks

Make sure the directory paths in the script are set to:

--self.train_mask_dir='./masks/train/' 
--self.train_img_dir = './images/train/'
--test_img_dir ='./images/celebA_HQ_test/'
--test_mask_dir ='./masks/test_masks/'
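
To recreate this layout from Python, a short sketch that builds the expected folder tree before you copy the data in:

import os

# Mirror the folder structure described above.
for d in ['./images/train/train_images',
          './images/celebA_HQ_test',
          './masks/train/train_masks',
          './masks/test_masks']:
    os.makedirs(d, exist_ok=True)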

Python requirements

  • Python 3.6
  • TensorFlow 1.13.1
  • Keras 2.3.1
  • OpenCV
  • NumPy
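
A quick sanity check that the installed versions match (a minimal sketch, nothing model-specific):

import tensorflow as tf
import keras
import cv2
import numpy as np

# Expected: TensorFlow 1.13.1 and Keras 2.3.1; OpenCV and NumPy are less strict.
print(tf.__version__, keras.__version__, cv2.__version__, np.__version__)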

Training and testing scripts

Use the run.py file to train the model and inpaint.py to test it. We recommend training for 100 epochs as a benchmark, in line with the state-of-the-art methods used for comparison with our model.
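
The preprocessing itself is handled inside these scripts; purely as an illustration of the usual irregular-mask convention (the file paths are hypothetical, and we assume white mask pixels mark the holes):

import cv2
import numpy as np

# Zero out the hole region indicated by a binary mask, as irregular-mask
# inpainting pipelines commonly do before feeding the generator.
img = cv2.imread('./images/celebA_HQ_test/000001.jpg').astype(np.float32)
mask = cv2.imread('./masks/test_masks/00000.png', cv2.IMREAD_GRAYSCALE)
hole = (mask > 127).astype(np.float32)[..., None]  # 1.0 where the hole is
masked_img = img * (1.0 - hole)                    # network input with hole zeroed out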

Code Reference

  1. Wasserstein GAN was implemented based on: Wasserstein GAN Keras
  2. Generative Multi-column Convolutional Neural Networks inpainting model in Keras: Image Inpainting via Generative Multi-column Convolutional Neural Networks
  3. Nvidia Mask Dataset, based on the paper: Image Inpainting for Irregular Holes Using Partial Convolutions

Citing this script

If you use this script, please consider citing R-MNet:

@inproceedings{jam2021r,
  title={R-MNet: A Perceptual Adversarial Network for Image Inpainting},
  author={Jam, Jireh and Kendrick, Connah and Drouard, Vincent and Walker, Kevin and Hsu, Gee-Sern and Yap, Moi Hoon},
  booktitle={Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision},
  pages={2714--2723},
  year={2021}
}
@article{jam2020r,
  title={R-MNet: A Perceptual Adversarial Network for Image Inpainting},
  author={Jam, Jireh and Kendrick, Connah and Drouard, Vincent and Walker, Kevin and Hsu, Gee-Sern and Yap, Moi Hoon},
  journal={arXiv preprint arXiv:2008.04621},
  year={2020}
}
