Damaged image restoration using a GAN whose generator is a U-Net autoencoder with skip connections

acuiram/DCGAN-with-U-Net

DCGAN-with-U-Net-Autoencoder

This project restores damaged images using a GAN whose generator is a U-Net autoencoder with skip connections; it was developed for my Bachelor's Degree thesis. For this project, I created my own dataset of artificially damaged images and reconstructed them using the DCGAN.
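The training pairs come from artificially corrupting clean images. The exact corruption procedure is not described here, so as an illustrative assumption only, a minimal NumPy sketch that blacks out random square patches to produce a (damaged, ground-truth) pair:

```python
import numpy as np

def damage_image(img, num_patches=5, patch_size=16, seed=None):
    """Return a copy of `img` with random square patches zeroed out.

    Hypothetical corruption scheme for building (damaged, ground-truth)
    training pairs; the thesis dataset may have been built differently.
    """
    rng = np.random.default_rng(seed)
    damaged = img.copy()
    h, w = img.shape[:2]
    for _ in range(num_patches):
        # Pick a top-left corner so the patch stays inside the image
        y = int(rng.integers(0, h - patch_size))
        x = int(rng.integers(0, w - patch_size))
        damaged[y:y + patch_size, x:x + patch_size] = 0
    return damaged

# Usage: pair each clean image with its damaged counterpart
clean = np.random.default_rng(0).random((128, 128, 3))
damaged = damage_image(clean, seed=0)
```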

Generator Description: Due to the nature of the proposed task, a classical generator that starts from a noise vector is not applicable. Instead, an autoencoder receives the damaged images as input, maps them into the latent space, and is trained to produce an output close to the ground truth. The concatenated (skip) layers pass information directly to the decoder network, reducing the amount that must travel through the bottleneck layer. The strided convolutions used in this study reduce the number of parameters and proved more effective for this case than a classical U-Net with MaxPooling layers. This type of U-Net therefore forms the base of the generator.

Generator Flowchart:
(generator flowchart image)

Discriminator Description: The discriminator network can be described as a function that maps image data to a probability: it classifies images as real (probability close to 1) or fake (probability close to 0). The discriminator examines both the real images (training samples) and the generated images.

Discriminator Flowchart:
(discriminator flowchart image)
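The image-to-probability mapping above can be sketched as a small DCGAN-style convolutional classifier. Again, the depth and filter counts are my assumptions for illustration:

```python
import tensorflow as tf
from tensorflow.keras import layers, Model

def build_discriminator(img_shape=(128, 128, 3)):
    """Maps an image to a single probability: ~1 for real, ~0 for fake."""
    inp = layers.Input(shape=img_shape)
    # Strided convolutions progressively downsample the image
    x = layers.Conv2D(64, 4, strides=2, padding="same", activation=tf.nn.leaky_relu)(inp)
    x = layers.Conv2D(128, 4, strides=2, padding="same", activation=tf.nn.leaky_relu)(x)
    x = layers.Conv2D(256, 4, strides=2, padding="same", activation=tf.nn.leaky_relu)(x)
    x = layers.Flatten()(x)
    # Sigmoid squashes the logit into a [0, 1] probability
    out = layers.Dense(1, activation="sigmoid")(x)
    return Model(inp, out, name="discriminator")
```

During training this network would score real training samples and generator outputs alike, with the usual binary cross-entropy adversarial loss.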

Although the dataset I created was not ideal, here are some of the reconstructed results:

(sample reconstruction result image)
