Stroke Controllable Fast Style Transfer

This repository contains the public release of the Python implementation of

Stroke Controllable Fast Style Transfer with Adaptive Receptive Fields [arXiv]

Yongcheng Jing*, Yang Liu*, Yezhou Yang, Zunlei Feng, Yizhou Yu, Dacheng Tao, Mingli Song

If you use this code or find this work useful for your research, please cite:

@inproceedings{jing2018stroke,
  title={Stroke Controllable Fast Style Transfer with Adaptive Receptive Fields},
  author={Jing, Yongcheng and Liu, Yang and Yang, Yezhou and Feng, Zunlei and Yu, Yizhou and Tao, Dacheng and Song, Mingli},
  booktitle={European conference on computer vision},
  year={2018}
}

Please also consider citing our other work:

@article{jing2017neural,
  title={Neural Style Transfer: A Review},
  author={Jing, Yongcheng and Yang, Yezhou and Feng, Zunlei and Ye, Jingwen and Yu, Yizhou and Song, Mingli},
  journal={arXiv preprint arXiv:1705.04058},
  year={2017}
}

Getting Started

Implemented and tested on Ubuntu 14.04 with Python 2.7 and TensorFlow 1.4.1.

Dependencies

Download pre-trained VGG-19 model

The TensorFlow VGG-19 model is adapted from VGG Tensorflow with a few modifications to the class interface. The VGG-19 model weights are stored as a .npy file and can be downloaded from Google Drive or BaiduYun Pan. After downloading, copy the weight file to the /vgg19 directory.
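
As a quick sanity check after downloading, the weight file can be opened directly with NumPy. The snippet below is only an illustrative sketch: the exact filename and the {layer_name: [kernel, bias]} dictionary layout are assumptions based on the VGG Tensorflow release, not something defined by this repository.

# Illustrative sketch: filename and dictionary layout are assumed from the
# VGG Tensorflow .npy release.
import numpy as np

weights = np.load("vgg19/vgg19.npy", encoding="latin1", allow_pickle=True).item()
print(sorted(weights.keys()))        # expect conv1_1 ... conv5_4, fc6, fc7, fc8
kernel, bias = weights["conv1_1"]    # each entry is a [kernel, bias] pair
print(kernel.shape, bias.shape)      # typically (3, 3, 3, 64) and (64,)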

Basic Usage

Train the network

Use train.py to train a new stroke-controllable style transfer network. Run python train.py -h to view all the possible parameters. The training dataset is MSCOCO train2014, which can be downloaded from here; alternatively, you can use a randomly selected subset of 2k MSCOCO images (download from here) for a quick setup. Example usage:

$ python train.py \
    --style /path/to/style_image.jpg \
    --train_path /path/to/MSCOCO_dataset \
    --sample_path /path/to/content_image.jpg
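
If you take the quick-setup route, a 2k-image subset can also be built locally by sampling a full MSCOCO train2014 download. The helper below is not part of the repository, and the paths are placeholders to adjust.

# Not repository code: sample 2,000 images from a full MSCOCO train2014
# download to build a small training set for --train_path.
import os
import random
import shutil

src_dir = "/path/to/MSCOCO_train2014"   # full dataset (placeholder path)
dst_dir = "/path/to/MSCOCO_2k_subset"   # folder to pass to --train_path
if not os.path.isdir(dst_dir):
    os.makedirs(dst_dir)

images = [f for f in os.listdir(src_dir) if f.lower().endswith(".jpg")]
for name in random.sample(images, 2000):
    shutil.copy(os.path.join(src_dir, name), dst_dir)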

Freeze model

Use pack_model.py to freeze the saved checkpoint. Run python pack_model.py -h to view all parameters. Example usage:

$ python pack_model.py \
    --checkpoint_dir ./examples/checkpoint/some_style \
    --output ./examples/model/some_style.pb

We also provide some pre-trained style models for fast feed-forward stylization, stored under ./examples/model/pre-trained/.
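
Before running inference, a frozen .pb can be sanity-checked with the standard TensorFlow 1.x graph-loading calls. This is a generic sketch rather than repository code; the op names it prints are whatever pack_model.py actually exported.

# Generic TF 1.x sketch (not repository code): load a frozen .pb and list its ops.
import tensorflow as tf

with tf.gfile.GFile("./examples/model/some_style.pb", "rb") as f:
    graph_def = tf.GraphDef()
    graph_def.ParseFromString(f.read())

with tf.Graph().as_default() as graph:
    tf.import_graph_def(graph_def, name="")

for op in graph.get_operations()[:10]:
    print(op.name)   # inspect the input placeholder and output op names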

Inference

Use inference_style_transfer.py to run inference on a content image with the frozen style model. Set --interp N to enable interpolation inference, where N is the number of continuous stroke-size results. Example usage:

$ python inference_style_transfer.py \
    --model ./examples/model/some_style.pb \
    --serial ./examples/serial/default/ \
    --content ./examples/content/some_content.jpg

For CPU users, please set os.environ["CUDA_VISIBLE_DEVICES"] = "" in the source code.
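
For example, placing that line near the top of inference_style_transfer.py, before TensorFlow is imported, keeps TensorFlow from claiming a GPU (the exact placement in the script is up to you):

# Hide all GPUs so TensorFlow falls back to the CPU; set this before importing TensorFlow.
import os
os.environ["CUDA_VISIBLE_DEVICES"] = ""

import tensorflow as tf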

Examples

Discrete Stroke Size Control

From left to right are the content, the style, and the 256-, 512-, and 768-stroke-size results.





Spatial Stroke Size Control

From left to right are the content & style, the mask, the result with the same stroke size across the image, and the result with spatial stroke size control.







Continuous Stroke Size Control

The stroke size grows from left to right. We zoom in on the same region (red frame) to observe the variations in stroke size.








License

© Alibaba-Zhejiang University Joint Research Institute of Frontier Technologies, 2018. For academic and non-commercial use only.

Contact

Feel free to contact us if you have any questions (Yang Liu, lyng_95@zju.edu.cn).
