
Releases: POSTECH-CVLab/PyTorch-StudioGAN

v.0.4.0

05 Jul 18:09
e4c5d82
  • We verified the reproducibility of the implemented GANs.
  • We provide the Baby, Papa, and Grandpa ImageNet datasets, whose images are processed with an anti-aliasing, high-quality resizer.
  • StudioGAN provides a dedicated benchmark on standard datasets (CIFAR10, ImageNet, AFHQv2, and FFHQ).
  • StudioGAN supports InceptionV3, ResNet50, SwAV, DINO, and Swin Transformer backbones for GAN evaluation.
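StudioGAN's actual resizer lives in the repository; as a minimal illustration of why the anti-aliasing matters for dataset preparation, the sketch below downsamples with a box filter (area averaging) in plain NumPy. The function name is mine, not StudioGAN's API.

```python
import numpy as np

def downsample_antialiased(img: np.ndarray, factor: int) -> np.ndarray:
    """Box-filter (area-average) downsampling: each output pixel is the
    mean of a factor x factor block, which low-pass filters the image
    before subsampling and so suppresses aliasing."""
    h, w = img.shape[:2]
    assert h % factor == 0 and w % factor == 0
    blocks = img.reshape(h // factor, factor, w // factor, factor, -1)
    return blocks.mean(axis=(1, 3)).squeeze()

# A checkerboard is the classic aliasing victim: naive strided
# subsampling keeps only one colour, area averaging keeps the mean.
checker = np.indices((4, 4)).sum(axis=0) % 2 * 255.0
naive = checker[::2, ::2]                     # all zeros: pattern aliased away
smooth = downsample_antialiased(checker, 2)   # all 127.5: correct local mean
```

Naive strided subsampling discards high frequencies incorrectly, which is exactly the distortion a high-quality resizer avoids.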

v.0.3.0

05 Nov 16:08
6650d2f
  • Add SOTA GANs: LGAN, TACGAN, StyleGAN2, MDGAN, MHGAN, ADCGAN, ReACGAN (our new paper).
  • Add five types of differentiable augmentation: CR, DiffAugment, ADA, SimCLR, BYOL.
  • Implement useful regularizations: Top-K training, Feature Matching, R1 regularization, and MaxGP.
  • Add Improved Precision & Recall, Density & Coverage, iFID, and CAS for reliable evaluation.
  • Support InceptionV3 and SwAV backbones for GAN evaluation.
  • Verify the reproducibility of StyleGAN2 and BigGAN.
  • Fix bugs in FreezeD, DDP training, Mixed Precision training, and ADA.
  • Support Discriminator Driven Latent Sampling and Semantic Factorization for BigGAN evaluation.
  • Support wandb (Weights & Biases) logging as an alternative to TensorBoard.
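Among the regularizations above, Top-K training has a particularly simple core idea: update the generator only on the k fake samples the discriminator rates most realistic. A minimal NumPy sketch, assuming raw discriminator scores per sample (`topk_mask` is a hypothetical helper name, not StudioGAN's API):

```python
import numpy as np

def topk_mask(d_scores: np.ndarray, k: int) -> np.ndarray:
    """Top-K training: build a boolean mask selecting the k fake samples
    with the highest discriminator scores; the generator loss is then
    averaged over that subset only, discarding the worst samples'
    gradients."""
    idx = np.argsort(d_scores)[-k:]            # indices of the k highest scores
    mask = np.zeros_like(d_scores, dtype=bool)
    mask[idx] = True
    return mask

scores = np.array([0.1, 0.9, 0.4, 0.7])        # per-sample D outputs on fakes
mask = topk_mask(scores, k=2)                  # keeps the 0.9 and 0.7 samples
```

In a real training loop the same mask would gate the generator loss before backpropagation; k is typically annealed from the full batch size down to a fraction of it.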

v0.2.0

23 Feb 14:33

Second release of StudioGAN with the following features:

  • Fix minor bugs (slow convergence when training GAN + ADA models, tracking BN statistics during evaluation, etc.).
  • Add multi-node DistributedDataParallel (DDP) training.
  • Add comprehensive benchmarks on CIFAR10, Tiny_ImageNet, and ImageNet datasets.
  • Provide pre-trained models and log files for future research.
  • Add LARS optimizer and TSNE analysis.

v0.1.0

07 Dec 03:02
276f2c5

First StudioGAN release with the following features:

  • Extensive GAN implementations for PyTorch: from DCGAN to ADAGAN
  • Comprehensive benchmark of GANs using CIFAR10 dataset
  • Better performance and lower memory consumption than original implementations
  • Provide pre-trained models fully compatible with up-to-date PyTorch environments
  • Support multi-GPU training (both DP and DDP), mixed-precision training, synchronized batch normalization, and TensorBoard visualization
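The mixed-precision support listed above relies on loss scaling (as in PyTorch AMP): small gradients that would underflow in float16 are multiplied by a large scale before the backward pass and divided back out in float32 before the optimizer step. A NumPy sketch of just the numeric rationale (the constants are illustrative):

```python
import numpy as np

SCALE = np.float32(2.0 ** 16)                # typical initial AMP loss scale

grad_true = np.float32(1e-8)                 # gradient too small for fp16
grad_fp16 = np.float16(grad_true)            # underflows: flushes to 0.0
grad_scaled = np.float16(grad_true * SCALE)  # survives in fp16 (~6.55e-4)
grad_recovered = np.float32(grad_scaled) / SCALE  # unscaled back in fp32
```

Without scaling the update would silently vanish; with it, the recovered float32 gradient is close to the true value, which is why frameworks scale the loss rather than the weights.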