StyleGAN and StyleGAN2

An unofficial implementation of StyleGAN models for educational purposes; the task was to generate anime faces.

A universal loader is implemented for arbitrary standard models and loss functions. The StyleGAN config E (without mixing regularization), StyleGAN2 (weight demodulation, no progressive growing, new G & D architecture), and R1GAN architectures are also implemented.

Prerequisites

Recommended packages: PyTorch 1.9.0+cu102 or higher, Pillow, and optionally wandb.

Due to the limitations of Git LFS, all pre-trained weights were uploaded to Google Drive.

Structure in the repository:

|   start.py               # Loader of models and dataset
|   starter.ipynb          # A notebook for experiments
|           
+---configs                # Configurations for running different models
|       R1GAN.json
|       StyleGAN.json
|       StyleGAN2.json
|       
+---models                 # Implementation of model architectures
|       R1GAN.py
|       StyleGAN.py
|       StyleGAN2.py
|
+---src
|       trainer.py         # Universal Trainer class for train loop
|       losses.py          # Loss functions
|
+---weight                 # Trained weights
\---utils                  # Support functions for working with images and models
    |   images.py
    |   register.py
    |   video.py
    |   weights.py
    |

Starting training

Example of starting model training:

python3 start.py "configs/StyleGAN.json" True

All training begins by running the script start.py, which is given a JSON config file, for example "configs/StyleGAN.json", and a second parameter that enables or disables wandb logging (disabled by default).

Since a universal loader for GAN training was implemented, all settings are passed in a JSON file; start.py takes care of creating a trainer and loading all the parameters from this config into it. For more detailed information on adding your own models and loss functions to the loader, see the wiki page on the universal loader. A rough illustration of such a config is shown below.
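
A config of this kind bundles model, loss, and training settings in one place. The keys below are hypothetical stand-ins, not the repository's actual schema; check the files in configs/ for the real field names.

import json

# Hypothetical config sketch -- the real keys are defined by the examples in configs/
config = {
    "model": "StyleGAN",      # which architecture from models/ to build
    "loss": "R1",             # which loss function from src/losses.py to use
    "resolution": 64,         # output image resolution
    "batch_size": 32,
    "weight_dir": "weight",   # folder the trainer saves and loads weights from
}

with open("configs/MyExperiment.json", "w") as f:
    json.dump(config, f, indent=4)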

Another way, for example for training in a Jupyter notebook:

from start import init_train

Trainer = init_train("configs/StyleGAN.json", wandb_set=True)
Trainer.train_loop()

For more detailed information on setting up configs, see the wiki page on configuring the config.

Examples of generation

Many runs were conducted for hyperparameter selection and testing. The result of one StyleGAN run at 64x64 resolution, trained for 1 day on 4 x Tesla V100 GPUs with a dataset of 20k 64x64 images, is shown below.

StyleGAN 64

Also shown below is the result of training the StyleGAN model at 256x256 resolution, trained for 4 days on 4 x Tesla V100 GPUs with a dataset of 92k 256x256 images:

StyleGAN 256

The images are obtained by denormalization, so their saturation can be adjusted.
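
For reference, a minimal sketch of such a denormalization step, assuming the generator outputs tensors normalized to [-1, 1] (the exact normalization used here may differ):

import torch

def denormalize(batch: torch.Tensor) -> torch.Tensor:
    # Map generator output from [-1, 1] back to [0, 1] for saving or display
    return (batch * 0.5 + 0.5).clamp(0.0, 1.0)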

Generating images after training

By default, start.py loads the most recent trained weights from the folder specified in the config, and the generator can be accessed as Trainer.G. In addition, various presets were implemented for quick actions with the generator; an example is given below:

from start import init_train
from utils.weights import LoadWeights
from utils.images import SaveImages

# Loading models and the latest weights without loading the dataset
Trainer = init_train("configs/StyleGAN.json", load_dataset=False)

# Loading custom weights with inexact name matching
LoadWeights(Trainer, 'Weight/weight 42.pth')

# Save 10 randomly generated images to the img folder
SaveImages(Trainer, dir='img', cnt=10) 
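
Continuing from the example above, images can also be sampled by hand through Trainer.G. The following is a sketch only: the 512-dimensional latent space and the call signature (latents in, images out) are assumptions that should be checked against your config.

import torch

z = torch.randn(4, 512)      # assumed latent dimension; check the config value
with torch.no_grad():
    images = Trainer.G(z)    # assumed signature: a batch of latents to a batch of images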

Example of creating a video by interpolating between two images:

from utils.video import FromToVideo, GenerateVideo

samples = FromToVideo(z_1, z_2)  # A tensor of frames interpolated between latent vectors z_1 and z_2
GenerateVideo(samples)

interpolate

Detailed information on the implementation

To support multiple GPUs, DataParallel was used instead of DistributedDataParallel; when testing StyleGAN on 4 x Tesla V100 GPUs, fairly high utilization was achieved. A sketch of the wrapping step is shown below.
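
A minimal sketch of that wrapping step, with a placeholder network standing in for the actual generator (illustrative only, not the repository's training code):

import torch
import torch.nn as nn

# Placeholder network standing in for the real generator
generator = nn.Sequential(nn.Linear(512, 3 * 64 * 64), nn.Tanh())

# DataParallel splits each input batch across all visible GPUs
if torch.cuda.is_available():
    generator = nn.DataParallel(generator).cuda()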

To increase utilization, a custom loader was also written that loads the entire dataset into RAM; this speeds up training at the cost of a longer initialization time. A sketch of the idea follows.
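
A minimal sketch of such an in-RAM dataset, assuming the images are preprocessed into a single tensor up front (names are illustrative, not the repository's actual loader):

import torch
from torch.utils.data import Dataset

class InMemoryImages(Dataset):
    """Holds every image tensor in RAM: fast iteration at the cost of startup time."""

    def __init__(self, images: torch.Tensor):
        self.images = images  # e.g. an (N, C, H, W) tensor prepared ahead of time

    def __len__(self):
        return len(self.images)

    def __getitem__(self, idx):
        return self.images[idx]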

Credits

Architectures

Loss Functions

  • WGAN-GP was taken from this implementation: WGAN-GP

Anime datasets