
Generative Models

Implementations of generative models with TensorFlow 1.x.

Contributors

MMC Lab GAN Study Group members


Implemented Paper List (20 Papers)

GAN

  1. [GAN] Generative Adversarial Networks
  2. [DCGAN] Unsupervised Representation Learning with Deep Convolutional Generative Adversarial Networks
  3. [LSGAN] Least Squares Generative Adversarial Networks
  4. [WGAN] Wasserstein GAN
  5. [WGAN_GP] Improved Training of Wasserstein GANs
  6. [CGAN] Conditional Generative Adversarial Nets
  7. [InfoGAN] Interpretable Representation Learning by Information Maximizing Generative Adversarial Nets
  8. [HoloGAN] Unsupervised Learning of 3D Representations From Natural Images
  9. [SinGAN] Learning a Generative Model from a Single Natural Image
  10. [PGGAN] Progressive Growing of GANs for Improved Quality, Stability, and Variation
  11. [StyleGAN] A Style-Based Generator Architecture for Generative Adversarial Networks
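
For reference, the core adversarial objective shared by the models above fits in a few lines of TensorFlow 1.x. The snippet below is only a minimal sketch of that objective (the non-saturating variant), not the exact code in this repository; `generator` and `discriminator` are hypothetical user-defined networks returning image tensors and logits, respectively.

```python
import tensorflow as tf

# Minimal non-saturating GAN objective (sketch only).
# `generator` and `discriminator` are hypothetical user-defined networks.
z = tf.placeholder(tf.float32, [None, 100])            # latent noise
x_real = tf.placeholder(tf.float32, [None, 28 * 28])   # real samples

x_fake = generator(z)
d_real_logits = discriminator(x_real)
d_fake_logits = discriminator(x_fake, reuse=True)

bce = tf.nn.sigmoid_cross_entropy_with_logits
d_loss = tf.reduce_mean(bce(labels=tf.ones_like(d_real_logits), logits=d_real_logits)) \
       + tf.reduce_mean(bce(labels=tf.zeros_like(d_fake_logits), logits=d_fake_logits))
g_loss = tf.reduce_mean(bce(labels=tf.ones_like(d_fake_logits), logits=d_fake_logits))

d_vars = tf.get_collection(tf.GraphKeys.TRAINABLE_VARIABLES, scope='discriminator')
g_vars = tf.get_collection(tf.GraphKeys.TRAINABLE_VARIABLES, scope='generator')
d_train = tf.train.AdamOptimizer(2e-4, beta1=0.5).minimize(d_loss, var_list=d_vars)
g_train = tf.train.AdamOptimizer(2e-4, beta1=0.5).minimize(g_loss, var_list=g_vars)
```

The later papers in the list mostly change this loss (least squares in LSGAN, a critic with weight clipping or a gradient penalty in WGAN/WGAN-GP) or the network architecture and training schedule, while keeping the same two-player setup.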

Image-to-Image Translation

  1. [CycleGAN] Unpaired Image-to-Image Translation using Cycle-Consistent Adversarial Networks
  2. [AGGAN] Attention-Guided Generative Adversarial Networks for Unsupervised Image-to-Image Translation
  3. [StarGAN] Unified Generative Adversarial Networks for Multi-Domain Image-to-Image Translation
  4. [DMIT] Multi-mapping Image-to-Image Translation via Learning Disentanglement
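
On top of the adversarial loss, the unpaired translation models above rely on a cycle-consistency term that forces a round trip between the two domains to reproduce the input. The snippet below is a rough sketch of that term only, assuming hypothetical generator functions `G_ab` (domain A to B) and `G_ba` (domain B to A):

```python
import tensorflow as tf

# Cycle-consistency term (sketch only).
# G_ab and G_ba are hypothetical generators mapping A -> B and B -> A.
x_a = tf.placeholder(tf.float32, [None, 256, 256, 3])
x_b = tf.placeholder(tf.float32, [None, 256, 256, 3])

cycle_a = G_ba(G_ab(x_a))   # A -> B -> A round trip
cycle_b = G_ab(G_ba(x_b))   # B -> A -> B round trip

# L1 penalty on the round trip, weighted by lambda (10 in the CycleGAN paper)
cycle_loss = 10.0 * (tf.reduce_mean(tf.abs(cycle_a - x_a)) +
                     tf.reduce_mean(tf.abs(cycle_b - x_b)))
```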

Interpretable GAN Latent

  1. Unsupervised Discovery of Interpretable Directions in the GAN Latent Space

VAE

  1. Auto-Encoding Variational Bayes (VAE)
  2. Beta-VAE: Learning Basic Visual Concepts with a Constrained Variational Framework
  3. Neural Discrete Representation Learning (VQ-VAE)
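
All three VAE variants above minimize a reconstruction term plus a KL term (the negative ELBO); Beta-VAE simply multiplies the KL term by a constant beta > 1, and VQ-VAE replaces the continuous latent with a quantized codebook. The snippet below is a minimal TF 1.x sketch of the Gaussian-latent case, assuming hypothetical `encoder` and `decoder` networks:

```python
import tensorflow as tf

# Negative ELBO for a Gaussian-latent VAE (sketch only).
# `encoder` and `decoder` are hypothetical user-defined networks.
x = tf.placeholder(tf.float32, [None, 28 * 28])
beta = 1.0  # beta > 1 turns this into the Beta-VAE objective

mu, log_var = encoder(x)                      # parameters of q(z|x)
eps = tf.random_normal(tf.shape(mu))
z = mu + tf.exp(0.5 * log_var) * eps          # reparameterization trick
x_logits = decoder(z)

recon = tf.reduce_sum(
    tf.nn.sigmoid_cross_entropy_with_logits(labels=x, logits=x_logits), axis=1)
kl = -0.5 * tf.reduce_sum(1.0 + log_var - tf.square(mu) - tf.exp(log_var), axis=1)
loss = tf.reduce_mean(recon + beta * kl)
train_op = tf.train.AdamOptimizer(1e-3).minimize(loss)
```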

Application

  1. Adherent Raindrop Removal with Self-Supervised Attention Maps and Spatio-Temporal Generative Adversarial Networks

Our Results

GAN Results

1. GAN

MNIST


2. DCGAN

MNIST / CelebA

3. LSGAN

MNIST / CelebA

4. WGAN

MNIST / CelebA

5. WGAN-GP

MNIST / CelebA

6. Conditional GAN

MNIST


7. InfoGAN

MNIST


8. HoloGAN

CelebA


9. SinGAN

Balloon

Mountain

Starry Night


10. PGGAN

Training took about two weeks on a TITAN RTX, showing roughly 600k images per stage.
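
For context, "per stage" refers to progressive growing: the output resolution doubles at each stage and the newly added layers are faded in while roughly 600k images are shown. The loop below is only an illustrative sketch of that bookkeeping (with a hypothetical `train_step` helper), not the exact schedule used for these results.

```python
# Illustrative progressive-growing schedule (sketch only).
IMAGES_PER_STAGE = 600_000
BATCH_SIZE = 16
resolutions = [4, 8, 16, 32, 64, 128, 256, 512, 1024]

for res in resolutions:
    for seen in range(0, IMAGES_PER_STAGE, BATCH_SIZE):
        # alpha ramps 0 -> 1 over the first half of the stage, blending the
        # newly added higher-resolution layers into the existing network
        alpha = min(1.0, seen / (IMAGES_PER_STAGE / 2))
        train_step(resolution=res, alpha=alpha, batch_size=BATCH_SIZE)  # hypothetical helper
```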

1024x1024 images

Cherry-picked images

Latent interpolation

Fixed latent

Non-cherry-picked images


11. StyleGAN

CelebA-HQ (512x512 images)

Selected images

Style Mixing with Latent Codes

Random Images


AFHQ (512x512 images)

Selected images

Style Mixing with Latent Codes

Random Images


Image-to-Image Translation Results

1. CycleGAN

Monet to Photo / Photo to Monet
Horse to Zebra / Zebra to Horse

2. AGGAN

Horse to Zebra / Zebra to Horse

3. StarGAN

CelebA


4. DMIT

Summer2Winter


Interpretable GAN Latent

1. Unsupervised Discovery of Interpretable Directions in the GAN Latent Space

1) MNIST


VAE Results

1. VAE

Reconstruction

MNIST / CelebA

Latent Space Interpolation (MNIST)

Latent Space Interpolation (CelebA)


2. Beta-VAE

Latent Space Interpolation: Beta = 10 (CelebA)

Latent Space Interpolation: Beta = 200 (CelebA)


3. VQ-VAE

Reconstruction (MNIST)

Input / Reconstruction

Reconstruction (CelebA)

Input / Reconstruction

Latent Decoding with a Trained PixelCNN Prior

MNIST / CelebA
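
These samples come from fitting an autoregressive PixelCNN prior over the discrete VQ-VAE code map and decoding codes drawn from that prior. The loop below is a rough sketch of that ancestral sampling step; `pixelcnn_logits`, `codebook_lookup`, and `vq_decoder` are hypothetical stand-ins for the trained components, and restoring their weights is omitted.

```python
import numpy as np
import tensorflow as tf

# Ancestral sampling from a PixelCNN prior over VQ-VAE codes (sketch only).
# pixelcnn_logits, codebook_lookup and vq_decoder are hypothetical stand-ins
# for the trained model; weight restoring is omitted for brevity.
H, W, K = 8, 8, 512                          # latent grid size, codebook size
codes = tf.placeholder(tf.int32, [1, H, W])
probs = tf.nn.softmax(pixelcnn_logits(codes))     # [1, H, W, K]
decoded = vq_decoder(codebook_lookup(codes))      # image from a code map

with tf.Session() as sess:
    sample = np.zeros((1, H, W), dtype=np.int32)
    for i in range(H):                       # fill one latent position at a time
        for j in range(W):
            p = sess.run(probs, feed_dict={codes: sample})[0, i, j]
            sample[0, i, j] = np.random.choice(K, p=p)
    image = sess.run(decoded, feed_dict={codes: sample})
```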

Application Results

1. Raindrop Removal