
PyTorch Implementation of "Progressive Growing of GANs (PGGAN)"

PyTorch implementation of PROGRESSIVE GROWING OF GANS FOR IMPROVED QUALITY, STABILITY, AND VARIATION
YOUR CONTRIBUTION IS INVALUABLE FOR THIS PROJECT :)


What's different from the official paper?

  • original paper: trans(G)-->trans(D)-->stab / this code: trans(G)-->stab-->trans(D)-->stab
  • no NIN layer is used; unnecessary layers (such as low-resolution blocks) are automatically flushed out as the network grows.
  • torch.nn.utils.weight_norm is used for the generator's to_rgb_layer (see the sketch below).
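
To make the transition concrete, here is a minimal sketch of a generator-side fade-in with weight-normalized to_rgb layers; it is not the repo's exact module, and the class and argument names are illustrative assumptions.

import torch.nn as nn
from torch.nn.utils import weight_norm

# Minimal sketch, not the repo's exact modules: during a generator transition phase,
# the output of the newly grown high-resolution block is faded in against the
# upsampled output of the previous block; both to_rgb convolutions use weight_norm.
class FadeInToRGB(nn.Module):
    def __init__(self, low_channels, high_channels):
        super(FadeInToRGB, self).__init__()
        self.to_rgb_low = weight_norm(nn.Conv2d(low_channels, 3, kernel_size=1))
        self.to_rgb_high = weight_norm(nn.Conv2d(high_channels, 3, kernel_size=1))
        self.upsample = nn.Upsample(scale_factor=2, mode='nearest')

    def forward(self, low_feat, high_feat, alpha):
        low_rgb = self.upsample(self.to_rgb_low(low_feat))   # old branch, upsampled to the new size
        high_rgb = self.to_rgb_high(high_feat)               # newly grown branch
        return (1.0 - alpha) * low_rgb + alpha * high_rgb    # alpha ramps 0 -> 1 during transition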

How to use?

[step 1.] Prepare dataset
The authors of Progressive GAN released the CelebA-HQ dataset, and support for it is in progress.
Until then, please use CelebA to generate face images up to 256x256. CelebA-HQ data loading will be supported very soon.

---------------------------------------------
The training data folder should look like : 
<train_data_root>
                |--CelebA
                        |--image1
                        |--image2
                        |--image3 ...
---------------------------------------------
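
For reference, a minimal loading sketch for the layout above, using torchvision's ImageFolder rather than the repo's dataloader.py; the function name and defaults here are assumptions.

import torch
from torchvision import datasets, transforms

# Minimal sketch, not the repo's dataloader.py: read <train_data_root>/CelebA/* and
# resize every image to the current training resolution.
def make_loader(train_data_root, resolution=256, batch_size=16):
    transform = transforms.Compose([
        transforms.Resize((resolution, resolution)),
        transforms.ToTensor(),
    ])
    dataset = datasets.ImageFolder(train_data_root, transform=transform)
    return torch.utils.data.DataLoader(dataset, batch_size=batch_size,
                                       shuffle=True, num_workers=4)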

[step 2.] Prepare environment using virtualenv

  • you can easily set up the PyTorch (v0.3) and TensorFlow environment using virtualenv.
  • CAUTION: if you have trouble installing PyTorch, install it manually using pip. [PyTorch Install]
$ virtualenv --python=python2.7 venv
$ . venv/bin/activate
$ pip install -r requirements.txt
$ conda install pytorch torchvision -c pytorch

[step 3.] Run training

  • edit config.py to change parameters (don't forget to change the path to the training images); see the sketch after this list.
  • specify which GPU devices are to be used, and change the "n_gpu" option in config.py to enable multi-GPU training.
  • run and enjoy!
  (example)
  If using Single-GPU (device_id = 0):
  $ vim config.py   -->   change "n_gpu=1"
  $ CUDA_VISIBLE_DEVICES=0 python trainer.py
  
  If using Multi-GPU (device ids = 1,3,7):
  $ vim config.py   -->   change "n_gpu=3"
  $ CUDA_VISIBLE_DEVICES=1,3,7 python trainer.py
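
A hypothetical excerpt of the kind of options config.py exposes; apart from "n_gpu" and the training-image path mentioned above, the exact names are assumptions and may differ from the actual file.

# Hypothetical config.py excerpt (actual option names may differ).
train_data_root = '/path/to/train_data_root'   # parent folder containing the CelebA directory
n_gpu = 1                                      # number of GPUs made visible via CUDA_VISIBLE_DEVICES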

[step 4.] Display on tensorboard

  • you can check the training results on TensorBoard.

$ tensorboard --logdir repo/tensorboard --port 8888
Then open <host_ip>:8888 in your browser.
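
The log directory above suggests the trainer writes summaries with a TensorBoard writer; a minimal sketch assuming the tensorboardX package (the repo's actual logging code may differ), with illustrative tag names and values.

from tensorboardX import SummaryWriter

# Minimal sketch, assuming tensorboardX: write a scalar into repo/tensorboard so the
# tensorboard command above can display it.
writer = SummaryWriter('repo/tensorboard')
writer.add_scalar('loss/generator', 0.42, global_step=0)   # illustrative value
writer.close()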

[step 5.] Generate fake images using linear interpolation

$ CUDA_VISIBLE_DEVICES=0 python generate_interpolated.py
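
The idea behind the script is a straight line between two latent vectors, decoded point by point by the generator; a minimal sketch follows (the function and argument names are assumptions, not the script's API).

import torch

# Minimal sketch of latent-space linear interpolation (names are assumptions):
# decode evenly spaced points between two random latent vectors z0 and z1.
def interpolate_latents(generator, steps=10, nz=512, device='cuda'):
    z0 = torch.randn(1, nz, device=device)
    z1 = torch.randn(1, nz, device=device)
    images = []
    for t in torch.linspace(0.0, 1.0, steps):
        z = (1.0 - t.item()) * z0 + t.item() * z1   # linear interpolation in latent space
        # depending on the generator, z may need reshaping to (1, nz, 1, 1)
        images.append(generator(z))
    return images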

Experimental results

Results at higher resolutions (larger than 256x256) will be added soon.

Generated Images







Loss Curve


To-Do List (will be implemented soon)

  • Support WGAN-GP loss (a standard gradient-penalty sketch follows this list)
  • Training-resuming functionality
  • Loading the CelebA-HQ dataset (for 512x512 and 1024x1024 training)
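
For reference, the standard WGAN-GP gradient penalty looks like the sketch below; it is shown only to illustrate the planned loss and is not yet wired into this repo's trainer.

import torch

# Standard WGAN-GP gradient penalty (illustration only, not part of this repo yet):
# penalize the critic's gradient norm on random interpolates of real and fake batches.
def gradient_penalty(discriminator, real, fake, lam=10.0):
    eps = torch.rand(real.size(0), 1, 1, 1, device=real.device)
    x_hat = eps * real.detach() + (1.0 - eps) * fake.detach()
    x_hat.requires_grad_(True)
    d_out = discriminator(x_hat)
    grads = torch.autograd.grad(outputs=d_out.sum(), inputs=x_hat, create_graph=True)[0]
    grad_norm = grads.view(grads.size(0), -1).norm(2, dim=1)
    return lam * ((grad_norm - 1.0) ** 2).mean()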

Compatibility

  • CUDA v8.0
  • Tesla P40 (you may need more than 12GB of GPU memory; if you have less, please adjust the batch_table in dataloader.py, see the sketch below)
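
The batch_table maps training resolution to batch size so that higher resolutions use smaller batches; the shape below is a hypothetical illustration, and the actual keys and values in dataloader.py may differ.

# Hypothetical shape of batch_table in dataloader.py (actual values may differ):
# shrink the batch size as the resolution grows to stay within ~12GB of GPU memory.
batch_table = {
    4: 32, 8: 32, 16: 32, 32: 16,
    64: 16, 128: 8, 256: 4, 512: 2, 1024: 1,
}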

Acknowledgement

Author

MinchulShin, @nashory
