
StarGAN V2

1 Introduction

StarGAN V2 is an image-to-image translation model published at CVPR 2020. A good image-to-image translation model should learn a mapping between different visual domains while satisfying two properties: 1) diversity of generated images and 2) scalability over multiple domains. Existing methods address only one of these issues, offering limited diversity or requiring multiple models to cover all domains. StarGAN v2 is a single framework that tackles both and shows significantly improved results over the baselines. Experiments on CelebA-HQ and a new animal faces dataset (AFHQ) validate the superiority of StarGAN v2 in terms of visual quality, diversity, and scalability.

2 How to use

2.1 Prepare dataset

The CelebA-HQ dataset used by StarGAN V2 can be downloaded from here, and the AFHQ dataset can be downloaded from here. Then unzip the datasets into the PaddleGAN/data directory.
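
A minimal sketch of the unzip step, assuming the downloaded archives are named afhq.zip and celeba_hq.zip (adjust the names and paths so the result matches the layout shown below):

    # run from the directory containing the downloaded archives
    unzip afhq.zip -d PaddleGAN/data/
    unzip celeba_hq.zip -d PaddleGAN/data/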

The structure of the dataset is as follows:

  ├── data
      ├── afhq
      |   ├── train
      |   |   ├── cat
      |   |   ├── dog
      |   |   └── wild
      |   └── val
      |       ├── cat
      |       ├── dog
      |       └── wild
      └── celeba_hq
          ├── train
          |   ├── female
          |   └── male
          └── val
              ├── female
              └── male
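
To sanity-check the layout before training, you can list the class directories from the PaddleGAN root (an optional check, not something the training script requires):

    find data/afhq data/celeba_hq -maxdepth 2 -type d | sort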

2.2 Train/Test

The following example uses the AFHQ dataset. To use the CelebA-HQ dataset instead, change the config file, as sketched below.
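
Switching to CelebA-HQ only means pointing --config-file at the corresponding config. The filename starganv2_celeba_hq.yaml below is an assumption; check the configs/ directory for the exact name:

    python -u tools/main.py --config-file configs/starganv2_celeba_hq.yaml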

Train the model:

   python -u tools/main.py --config-file configs/starganv2_afhq.yaml
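
If multiple GPUs are available, PaddlePaddle's distributed launcher can typically drive the same entry point. This is a sketch assuming a standard PaddleGAN setup; the GPU ids are examples:

    # hypothetical 2-GPU launch
    python -m paddle.distributed.launch --gpus "0,1" tools/main.py --config-file configs/starganv2_afhq.yaml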

Test the model:

   python tools/main.py --config-file configs/starganv2_afhq.yaml --evaluate-only --load ${PATH_OF_WEIGHT}
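
Here ${PATH_OF_WEIGHT} is the path to a trained weight file. The checkpoint path in this example is hypothetical; substitute the file produced by your own training run:

    python tools/main.py --config-file configs/starganv2_afhq.yaml --evaluate-only --load output_dir/starganv2_afhq/epoch_100_checkpoint.pdparams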

3 Results

4 Model Download

| Model | Dataset | Download |
| --- | --- | --- |
| starganv2_afhq | AFHQ | starganv2_afhq |

References

1. StarGAN v2: Diverse Image Synthesis for Multiple Domains

   @inproceedings{choi2020starganv2,
     title={StarGAN v2: Diverse Image Synthesis for Multiple Domains},
     author={Yunjey Choi and Youngjung Uh and Jaejun Yoo and Jung-Woo Ha},
     booktitle={Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition},
     year={2020}
   }