
Image inpainting with adversarial edge learning


InpaintGAN

Introduction

This project presents a novel approach to the image inpainting problem using a Generative Adversarial Network (GAN) architecture. We systematically optimized the existing EdgeConnect model to improve its performance in a specific domain.

The proposed method achieved a precision of 28%, recall of 25%, and feature-matching loss of 25% while keeping the number of iterations at a minimum of 175,000. In contrast, EdgeConnect [1] achieved a precision of 27%, recall of 25%, and feature-matching loss of 45%. Our model can be useful for various image editing tasks such as image completion, object removal, and image restoration, where fine details are crucial. Our approach addresses the limitation of current state-of-the-art models in producing finely detailed images and opens new possibilities for image inpainting applications. This work is aimed at researchers, practitioners, and students in the fields of computer vision, image processing, and deep learning who are interested in the latest advancements in image inpainting and in improving current methods.

Prerequisites

  • Python 3
  • PyTorch 1.13
  • Eel 0.16
  • scikit-image
  • opencv 4.5.4.60

Optional

  • Development was done using the PyCharm IDE by JetBrains

Installation

  • Clone the repository
    git clone https://github.com/MoshPe/InpaintGAN.git
    cd InpaintGAN
  • Install the Python dependencies
    pip install -r requirements.txt

Dataset

We use the Human Faces dataset. To train the model on this dataset, please download it from the Kaggle website with a registered account.

Warning: The dataset must contain only images of type jpg, jpeg, or png, and the folder path must be in English.
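A quick way to check both constraints before training is a small validation helper. This is a hypothetical sketch (the repo does not ship a `validate_dataset` function); it only encodes the two rules stated in the warning above:

```python
import os

ALLOWED_EXTENSIONS = {".jpg", ".jpeg", ".png"}

def validate_dataset(folder):
    """Check the two dataset assumptions from the warning above:
    an English (ASCII-only) folder path and image files only."""
    if not folder.isascii():
        raise ValueError(f"Dataset path must be in English (ASCII): {folder}")
    bad = [f for f in os.listdir(folder)
           if os.path.splitext(f)[1].lower() not in ALLOWED_EXTENSIONS]
    if bad:
        raise ValueError(f"Non-image files found in dataset folder: {bad}")
    return True
```

Running this once on the downloaded Kaggle folder catches path and file-type problems before they surface mid-training.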

Getting Started

Training

To train the model, simply run main.py to open the Eel GUI used to operate the system.

  • Head over to the Train tab and configure the model.

  • Select which model to train, the image-masking method, and which edge-detection model to use.

  • Hit Next to continue configuring the model.

  • The following section configures the generator and discriminator for the Edge and Inpaint models respectively.

    | Option | Default | Description |
    |---|---|---|
    | LR | 0.0001 | learning rate |
    | D2G_LR | 0.1 | discriminator/generator learning rate ratio |
    | BETA1 | 0.0 | Adam optimizer beta1 |
    | BETA2 | 0.9 | Adam optimizer beta2 |
    | BATCH_SIZE | 8 | input batch size |
    | INPUT_SIZE | 256 | input image size for training (0 for original size) |
    | SIGMA | 2 | standard deviation of the Gaussian filter used in the Canny edge detector (0: random, -1: no edge) |
    | MAX_ITERS | 2e6 | maximum number of iterations to train the model |
    | EDGE_THRESHOLD | 0.5 | edge detection threshold (0-1) |
    | L1_LOSS_WEIGHT | 1 | L1 loss weight |
    | FM_LOSS_WEIGHT | 10 | feature-matching loss weight |
    | STYLE_LOSS_WEIGHT | 1 | style loss weight |
    | CONTENT_LOSS_WEIGHT | 1 | perceptual loss weight |
    | INPAINT_ADV_LOSS_WEIGHT | 0.01 | adversarial loss weight |
    | GAN_LOSS | nsgan | nsgan: non-saturating GAN, lsgan: least-squares GAN, hinge: hinge-loss GAN |
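For reference, the defaults above can be gathered into a plain Python mapping. This is a hypothetical sketch (the key names mirror the table, but the repo's actual config file layout may differ):

```python
# Training defaults mirroring the options table above.
TRAIN_CONFIG = {
    "LR": 1e-4,             # learning rate
    "D2G_LR": 0.1,          # discriminator LR as a ratio of the generator LR
    "BETA1": 0.0,           # Adam optimizer beta1
    "BETA2": 0.9,           # Adam optimizer beta2
    "BATCH_SIZE": 8,        # input batch size
    "INPUT_SIZE": 256,      # training image size (0 keeps original size)
    "SIGMA": 2,             # Canny Gaussian sigma (0: random, -1: no edge)
    "MAX_ITERS": 2e6,       # maximum training iterations
    "EDGE_THRESHOLD": 0.5,  # edge detection threshold in (0, 1)
    "L1_LOSS_WEIGHT": 1,
    "FM_LOSS_WEIGHT": 10,
    "STYLE_LOSS_WEIGHT": 1,
    "CONTENT_LOSS_WEIGHT": 1,
    "INPAINT_ADV_LOSS_WEIGHT": 0.01,
    "GAN_LOSS": "nsgan",    # or "lsgan", "hinge"
}

# D2G_LR is a ratio, so the discriminator's effective learning rate is:
disc_lr = TRAIN_CONFIG["LR"] * TRAIN_CONFIG["D2G_LR"]
```

With the defaults, the discriminator trains at one tenth of the generator's learning rate, which keeps it from overpowering the generator early in training.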

  • Run and train the model.

Inference

In this step the model is used for testing: upload an image, create a mask on the uploaded image, and run the model on the masked image.

1. Open Inference tab


2. Upload Image


3. Mask image

In this section the user can draw the mask onto the image for the model to fill.
Several line thicknesses are available for drawing, and drawn lines can be cleared.

4. Fill missing regions

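Filling the missing regions in EdgeConnect-style models typically ends with a compositing step: the network's prediction is kept only inside the masked region, while the original pixels are kept everywhere else. The function name below is hypothetical; this is a minimal sketch of that standard step, not the repo's exact code:

```python
import numpy as np

def merge_output(image, generated, mask):
    """Composite the inpainting result:
    keep the original pixels where mask == 0,
    use the network's prediction where mask == 1 (missing region).
    image/generated: float arrays in [0, 1], shape (H, W, C);
    mask: shape (H, W, 1), broadcast over channels."""
    mask = mask.astype(np.float32)
    return generated * mask + image * (1.0 - mask)
```

This guarantees that known pixels pass through unchanged and only the drawn mask region is replaced by generated content.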

Download Capstone PPT

To download the Capstone PPT, please click here.

Credit

Created with the help of the Edge-LBAM article and its GitHub repo implementation:
Image Inpainting with Edge-guided Learnable Bidirectional Attention Maps
Created with the help of the EdgeConnect article and its GitHub repo implementation:
EdgeConnect: Generative Image Inpainting with Adversarial Edge Learning
EdgeConnect: Structure Guided Image Inpainting using Edge Prediction:

Tal Yehoshua | Moshe Peretz
@article{nazeri2019edgeconnect,
  title={EdgeConnect: Generative Image Inpainting with Adversarial Edge Learning},
  author={Nazeri, Kamyar and Ng, Eric and Joseph, Tony and Qureshi, Faisal and Ebrahimi, Mehran},
  journal={arXiv preprint},
  year={2019}
}

@InProceedings{Nazeri_2019_ICCV,
  title = {EdgeConnect: Structure Guided Image Inpainting using Edge Prediction},
  author = {Nazeri, Kamyar and Ng, Eric and Joseph, Tony and Qureshi, Faisal and Ebrahimi, Mehran},
  booktitle = {The IEEE International Conference on Computer Vision (ICCV) Workshops},
  month = {Oct},
  year = {2019}
}