f-GAN

Tensorflow implementation of f-GAN: Training Generative Neural Samplers Using Variational Divergence Minimization (NIPS 2016).
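
For reference, the saddle-point objective f-GAN optimizes, with V_ω the critic network, g_f the divergence-specific output activation, and f* the Fenchel conjugate of the divergence f:

    F(\theta, \omega) = \mathbb{E}_{x \sim P}\big[\, g_f(V_\omega(x)) \,\big] + \mathbb{E}_{x \sim Q_\theta}\big[\, -f^{*}\big( g_f(V_\omega(x)) \big) \,\big]

The critic maximizes F over ω while the generator minimizes it over θ.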

TODO

  • make these divergences work (suggestions are welcome; the sketch after this list contrasts the tricky and theoretically correct G losses)
    • Kullback-Leibler with tricky G loss
    • Reverse-KL with tricky G loss
    • Pearson-X2 with tricky G loss
    • Squared-Hellinger with tricky G loss
    • Jensen-Shannon with tricky G loss
    • GAN with tricky G loss
  • test more divergences
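
As a concrete illustration of the two G losses named above, here is a minimal TF1-style sketch for the Pearson-X2 case (activation g_f(v) = v, conjugate f*(t) = t²/4 + t, as tabulated in the paper); the function and tensor names are hypothetical, not this repository's API:

    import tensorflow as tf

    # Sketch of the critic and generator losses for Pearson-X2.
    # d_real / d_fake are the raw critic outputs V(x) on real / generated batches.
    def pearson_x2_losses(d_real, d_fake, tricky_G=True):
        g_f = lambda v: v                      # divergence-specific output activation
        f_star = lambda t: 0.25 * t ** 2 + t   # Fenchel conjugate f*
        t_real, t_fake = g_f(d_real), g_f(d_fake)

        # The critic maximizes the variational bound F, so its loss is -F.
        d_loss = -(tf.reduce_mean(t_real) - tf.reduce_mean(f_star(t_fake)))

        if tricky_G:
            # Tricky G loss (Section 3.2): maximize E[g_f(V(x_fake))] instead.
            g_loss = -tf.reduce_mean(t_fake)
        else:
            # Theoretically correct G loss: minimize F w.r.t. the generator,
            # i.e. minimize -E[f*(g_f(V(x_fake)))].
            g_loss = -tf.reduce_mean(f_star(t_fake))
        return d_loss, g_loss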

Exemplar Results

  • Using tricky G loss (see Section 3.2 in the paper)

    [sample grids: Kullback-Leibler, Reverse-KL, Pearson-X2, Squared-Hellinger, Jensen-Shannon, GAN; the NaN entry likely marks a run whose training diverged]
  • Using theoretically correct G loss

    [sample grids for the same six divergences; the NaN entry likely marks a run whose training diverged]
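
The activation/conjugate pairs behind the six divergences above, following the tables in the paper; the dictionary layout is an illustrative assumption, not this repository's actual code:

    import tensorflow as tf

    # (g_f, f*) per divergence; keys mirror the names used in this README.
    DIVERGENCES = {
        'Kullback-Leibler':  (lambda v: v,                  lambda t: tf.exp(t - 1.0)),
        'Reverse-KL':        (lambda v: -tf.exp(-v),        lambda t: -1.0 - tf.log(-t)),
        'Pearson-X2':        (lambda v: v,                  lambda t: 0.25 * t ** 2 + t),
        'Squared-Hellinger': (lambda v: 1.0 - tf.exp(-v),   lambda t: t / (1.0 - t)),
        'Jensen-Shannon':    (lambda v: tf.log(2.0) - tf.nn.softplus(-v),
                              lambda t: -tf.log(2.0 - tf.exp(t))),
        'GAN':               (lambda v: -tf.nn.softplus(-v),
                              lambda t: -tf.log(1.0 - tf.exp(t))),
    }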

Usage

  • Prerequisites

    • tensorflow 1.7 or 1.8
    • python 2.7
  • Examples of training

    • training

      CUDA_VISIBLE_DEVICES=0 python train.py --dataset=mnist --divergence=Pearson-X2 --tricky_G
    • tensorboard for loss visualization

      CUDA_VISIBLE_DEVICES='' tensorboard --logdir ./output/mnist_Pearson-X2_trickyG/summaries --port 6006

Citation

If you find f-GAN useful in your research work, please consider citing:

@inproceedings{nowozin2016f,
  title={f-GAN: Training Generative Neural Samplers Using Variational Divergence Minimization},
  author={Nowozin, Sebastian and Cseke, Botond and Tomioka, Ryota},
  booktitle={Advances in Neural Information Processing Systems (NIPS)},
  year={2016}
}
