
Adaptive Style Transfer in TensorFlow and TensorLayer

Update:

  • (15/05/2020) Migrated to TensorLayer2 (backend=TensorFlow 2.x). Original TL1 code can be found here.

This repository is implemented with TensorLayer 2.0+.

Before "Arbitrary Style Transfer in Real-time with Adaptive Instance Normalization", there were two main approaches for style transfer. First, given one content image and one style image, we randomly initialize a noise image and update it to get the output image. The drawback of this apporach is slow, it usually takes 3 mins to get one image. After that, academic proposed to train one model for one specific style, which input one image to network, and output one image. This approach is far more faster than the previous approach, and achieved real-time style transfer.

However, one model per style is still not good enough for production. If a mobile app wants to support 100 styles offline, storing 100 models on the phone is impractical. Adaptive style transfer, in contrast, supports arbitrary styles with a single model: there is no need to train a new model for a new style. Simply input one content image and any style image you want!
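
The key operation that makes this possible is adaptive instance normalization (AdaIN), which aligns the channel-wise mean and standard deviation of the content features to those of the style features: AdaIN(x, y) = σ(y) · (x − μ(x)) / σ(x) + μ(y). Below is a minimal TensorFlow 2 sketch of this operation, for illustration only; the actual implementation in this repository may differ in details.

```python
import tensorflow as tf

def adain(content_feat, style_feat, eps=1e-5):
    """Adaptive instance normalization.

    content_feat, style_feat: float tensors of shape [batch, height, width, channels],
    e.g. VGG feature maps. Statistics are computed per sample and per channel.
    """
    # Channel-wise mean and variance over the spatial dimensions.
    c_mean, c_var = tf.nn.moments(content_feat, axes=[1, 2], keepdims=True)
    s_mean, s_var = tf.nn.moments(style_feat, axes=[1, 2], keepdims=True)

    # Normalize the content features, then re-scale and shift them
    # with the style statistics.
    normalized = (content_feat - c_mean) / tf.sqrt(c_var + eps)
    return normalized * tf.sqrt(s_var + eps) + s_mean
```

Because the style enters only through these feature statistics, a single trained decoder can render any style at test time, which is why one model suffices for arbitrary styles.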

⚠️ ⚠️ This repo will soon be moved here (please star it) for life-cycle management. More cool computer vision applications, such as pose estimation and style transfer, can be found in that organization.

Usage

  1. Install TensorFlow and the master branch of TensorLayer:

    pip install git+https://github.com/tensorlayer/tensorlayer.git
    
  2. You can use the train.py script to train your own model. To train, download the MSCOCO dataset (content images) and the Wikiart dataset (style images), and put them under the 'dataset/content_samples' and 'dataset/style_samples' folders respectively.

  3. You can then use the test.py script to run your trained model. A pretrained model, built for TensorLayer v2 with a decoder using DeConv2d layers, can be downloaded from here; put it into the 'pretrained_models' folder and rename it to 'dec_best_weights.h5'. A conceptual sketch of the inference pipeline is shown after this list.

  4. You may compare this TL2 version with the preceding TL1 branch to learn how to migrate TL1 examples. There are also plenty of comments in the code tagged with 'TL1to2:' for reference.
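
To make steps 2 and 3 concrete, here is a minimal, self-contained sketch of the encode → AdaIN → decode pipeline that test.py implements conceptually. It uses Keras' stock VGG19 as the encoder and a toy decoder as stand-ins; the repository's actual models (and the weights in 'dec_best_weights.h5') are defined in its own code and differ from this sketch.

```python
import tensorflow as tf

def adain(c, s, eps=1e-5):
    # Same operation as the AdaIN sketch above.
    c_mean, c_var = tf.nn.moments(c, axes=[1, 2], keepdims=True)
    s_mean, s_var = tf.nn.moments(s, axes=[1, 2], keepdims=True)
    return (c - c_mean) / tf.sqrt(c_var + eps) * tf.sqrt(s_var + eps) + s_mean

# Encoder: pretrained VGG19 truncated at relu4_1 ('block4_conv1'), as in the AdaIN paper.
vgg = tf.keras.applications.VGG19(include_top=False, weights='imagenet')
encoder = tf.keras.Model(vgg.input, vgg.get_layer('block4_conv1').output)

# Toy decoder: block4_conv1 features sit at 1/8 resolution, so three stride-2
# transposed convolutions (cf. the DeConv2d decoder mentioned in step 3)
# upsample back to an RGB image.
decoder = tf.keras.Sequential([
    tf.keras.layers.Conv2DTranspose(256, 3, strides=2, padding='same', activation='relu'),
    tf.keras.layers.Conv2DTranspose(128, 3, strides=2, padding='same', activation='relu'),
    tf.keras.layers.Conv2DTranspose(3, 3, strides=2, padding='same'),
])

content = tf.random.uniform([1, 256, 256, 3])  # stand-ins for real, preprocessed images
style = tf.random.uniform([1, 256, 256, 3])

t = adain(encoder(content), encoder(style))    # transfer the style statistics
stylized = decoder(t)                          # shape [1, 256, 256, 3]
```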

Results

Here are some result images (left to right: content, style, result):

Enjoy!

Discussion

License

  • This project is for academic use only.