Anime2Clothing

Official PyTorch implementation of Anime to Real Clothing: Cosplay Costume Generation via Image-to-Image Translation (https://arxiv.org/abs/2008.11479).

Prerequisites

  • Anaconda 3
  • Python 3
  • CPU or NVIDIA GPU + CUDA and cuDNN

Getting Started

Training

python train.py --project_name cosplay_synthesis --dataset DATASET

Training dataset structure

DATASET
├── train
│    ├── a
│    │   ├── 0.png
│    │   ├── 1.png
│    │      ︙
│    │   └── n.png
│    └── b
│        ├── 0.png
│        ├── 1.png
│           ︙
│        └── n.png
└── test
     ├── a
     │   ├── 0.png
     │   ├── 1.png
     │      ︙
     │   └── n.png
     └── b
         ├── 0.png
         ├── 1.png
            ︙
         └── n.png
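Folders a and b hold the two sides of each training pair, and the 0.png … n.png layout suggests images are matched by identical filenames. As a quick sanity check before training, a small script can flag unpaired files (this is our own sketch, not part of the repository, and the pairing-by-filename assumption is ours):

```python
from pathlib import Path

def check_dataset(root):
    """Report images that exist in a/ but not b/ (or vice versa) per split.

    Assumes pairs share the same filename across a/ and b/, matching the
    0.png ... n.png layout shown above.
    """
    problems = []
    for split in ("train", "test"):
        a_names = {p.name for p in (Path(root) / split / "a").glob("*.png")}
        b_names = {p.name for p in (Path(root) / split / "b").glob("*.png")}
        # Symmetric difference = files present on one side only.
        for name in sorted(a_names ^ b_names):
            problems.append(f"{split}: unpaired file {name}")
    return problems
```

`check_dataset("DATASET")` returns an empty list when every image has a partner, so it is easy to call before launching a long training run.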

Continue Training

Add the --continue_train option to resume training; you can also control the starting epoch and resolution. By default, the model is loaded from the latest checkpoint, but you can select a specific epoch with the --load_epoch option.

--continue_train --start_epoch 47 --start_resolution 256
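Combined with the training command above, a full resume invocation would look like the following (47 and 256 are just the example values from the flags above, not required settings):

```
python train.py --project_name cosplay_synthesis --dataset DATASET \
    --continue_train --start_epoch 47 --start_resolution 256
```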

Testing

python test.py --model_path models/pretrained_model.pth --input_dir dataset/test/a --output_dir result

Pre-trained model

You can download the pre-trained model from models/pretrained_unet_20200122.pth.

We recommend using an anime character image with a simple background as the input image.

Acknowledgments

Our code is inspired by pix2pix and pix2pixHD.
