Image-to-Image Translation for Anime Characters
Motivation

To implement a project based on Image-to-Image Translation with Conditional Adversarial Networks, by Phillip Isola, Jun-Yan Zhu, Tinghui Zhou, and Alexei A. Efros.

Citation

Image-to-Image Translation with Conditional Adversarial Networks:

@article{pix2pix2017,
  title={Image-to-Image Translation with Conditional Adversarial Networks},
  author={Isola, Phillip and Zhu, Jun-Yan and Zhou, Tinghui and Efros, Alexei A},
  journal={CVPR},
  year={2017}
}

Requirements

Python 3.8 or above with all required dependencies installed. To install them, run:

$ pip3 install -r requirements.txt

Make a folder called "WEIGHTS" inside the utils folder and place the pretrained weights (https://drive.google.com/file/d/1hLM_ZHTzi7GsQtSL-1bvG5RT2XMMBamI/view?usp=sharing) in it.
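The two steps above can be scripted; a sketch assuming the gdown CLI is installed (`pip install gdown`) and that the checkpoint is saved as `weights.pth` (the filename is illustrative — keep whatever name app.py expects):

```shell
# Create the expected weights folder inside utils/
mkdir -p utils/WEIGHTS

# Fetch the checkpoint from Google Drive by its file id
# (the id below is the one from the link above)
gdown 1hLM_ZHTzi7GsQtSL-1bvG5RT2XMMBamI -O utils/WEIGHTS/weights.pth
```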

To run the app

$ streamlit run app.py

In case of an albumentations error, reinstall it from source:

$ pip install -U git+https://github.com/albu/albumentations --no-cache-dir

Screenshots

Model Components and other details:

Generator: Unmodified UNET

Discriminator: Unmodified PatchGAN

Read the paper for more details :p
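For reference, the paper's default 70×70 PatchGAN maps a 256×256 input to a 30×30 grid of per-patch real/fake predictions. The spatial shrinkage can be checked with the standard convolution output-size formula (a sketch assuming kernel 4, padding 1, and layer strides 2, 2, 2, 1, 1, as in the paper's discriminator):

```python
def conv_out(size, kernel=4, stride=2, pad=1):
    # Standard convolution output-size formula.
    return (size + 2 * pad - kernel) // stride + 1

size = 256
for stride in (2, 2, 2, 1, 1):  # PatchGAN layer strides from the pix2pix paper
    size = conv_out(size, stride=stride)

print(size)  # -> 30: each output cell judges one 70x70 input patch
```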

Dataset:

modified version of https://www.kaggle.com/ktaebum/anime-sketch-colorization-pair by Taebum Kim
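The Kaggle dataset stores each sketch/color pair as a single image with the two domains side by side, so the loader has to split each file down the middle. A minimal numpy sketch (which half is the sketch and which is the color target is an assumption — verify against the actual files):

```python
import numpy as np

def split_pair(img):
    # Split a side-by-side paired image into its two halves.
    # Half order (sketch vs. color) depends on the dataset files.
    w = img.shape[1]
    return img[:, : w // 2], img[:, w // 2 :]

pair = np.zeros((512, 1024, 3), dtype=np.uint8)  # dummy paired image
left, right = split_pair(pair)
print(left.shape, right.shape)  # (512, 512, 3) (512, 512, 3)
```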

Hyperparameters:

LEARNING_RATE = 2e-4

BATCH_SIZE = 16

NUM_WORKERS = 2

IMAGE_SIZE = 256

CHANNELS_IMG = 3

L1_LAMBDA = 100

LAMBDA_GP = 10

NUM_EPOCHS = 7
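L1_LAMBDA above weights the pix2pix reconstruction term in the generator objective, L_G = L_adv + λ·L1. A minimal numpy sketch of how the two terms combine (illustrative only, not the repo's training loop):

```python
import numpy as np

def generator_loss(adv_loss, fake, target, l1_lambda=100):
    # pix2pix generator objective: adversarial term plus a
    # lambda-weighted L1 reconstruction penalty (lambda = 100 here).
    l1 = np.mean(np.abs(fake - target))
    return adv_loss + l1_lambda * l1

fake = np.full((4, 4), 0.5)
target = np.full((4, 4), 0.6)
loss = generator_loss(0.7, fake, target)
print(loss)  # 0.7 + 100 * 0.1, i.e. about 10.7
```

The large λ pushes outputs to stay close to the ground-truth colorization pixel-wise, while the adversarial term keeps them sharp.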

Final scores: D_fake = 0.179, D_real = 0.859

Results

(256 x 256 x 3 output)

Might Do

  • Implement with different datasets

About

Using a Pix2Pix GAN to translate anime images into something more aesthetic.
