
Neural style transfer using a CNN. Final project for an art course!


milescb/NeuralStyleTransfer


Neural Style Transfer

Here, we follow the implementation given in this TensorFlow example. We had to edit this code to run properly, following, to some extent, the implementation given here. The main idea is to use a convolutional neural network (CNN) to map the style of one picture onto another. This is made possible with the pre-trained VGG19 network in TensorFlow. The network takes as input the pixel values (RGB colors) of both an original image and a style image, and the style is then transferred onto the original photo. Note that I did not write the majority of the functions in StyleTransfer.py; rather, I corrected several bugs and collected the code into an easily usable format. See below for the methods implemented for running the code.
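In the Gatys-style approach that VGG19-based transfer follows, the "style" of an image is summarized by the Gram matrices of the CNN's feature maps, which capture correlations between channels. As a minimal, hypothetical sketch (using NumPy and a random array standing in for a real VGG19 activation):

```python
import numpy as np

def gram_matrix(features):
    """Gram matrix of a (height, width, channels) feature map.

    Entry (i, j) is the correlation between channels i and j,
    averaged over all spatial positions -- this channel-correlation
    matrix is the "style" representation matched during training.
    """
    h, w, c = features.shape
    flat = features.reshape(h * w, c)   # positions x channels
    return flat.T @ flat / (h * w)      # channels x channels

# Toy feature map standing in for a VGG19 layer activation
feats = np.random.rand(4, 4, 8).astype(np.float32)
g = gram_matrix(feats)
print(g.shape)  # (8, 8)
```

The style loss then compares these Gram matrices between the style image and the image being generated, while a separate content loss compares raw feature maps against the content image.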

Subject: the City of Berlin, Styles: various from German 20th century art

This project was done for an art class I took in Berlin while studying abroad. In the class, we discussed art styles and movements from the 20th century in Germany. My final project focused on applying these art styles to pictures I took around Berlin.

Subject Photos

I chose as my subject matter the following photos of iconic Berlin scenes:

  1. Haus des Lehrers with the Fernsehturm in the background

  2. Brandenburger Tor and Pariser Platz

  3. Pietà in the Neue Wache

Photos for Style

I chose the following artworks to provide the styles:

  1. German Expressionism: Marianne von Werefkin, Die rote Stadt, tempera on cardboard, 1902.

  2. Suprematism by Kazimir Severinovich Malevich

  3. Sascha Wiederhold, Figuren im Raum, 1928, oil on cardboard on canvas.

Results of Style Transfer

The results of the style transfer are shown below for each photo and each style. Styles are arranged in the order given above.

All of the above photos were created with a learning rate of 10.0 over 500 epochs of training. This takes approximately 17 minutes on an M1 Mac with 16 GB of RAM.

Running the code

All required packages are listed in the requirements.txt file. To run the code, create a virtual environment and install the packages via

python3 -m venv <path_to_new_env>
source <path_to_new_env>/bin/activate
pip install -r requirements.txt

The code then runs out of the box with the following command:

python3 perform_transfer.py

Possible command-line arguments are

python3 perform_transfer.py --nEpochs=100 --learning_rate=10.0 --content_path=<path_to_content> --style_path=<path_to_styles> --save_folder=<path_to_save_folder> --display_num=10
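Inside perform_transfer.py, these flags can be parsed with the standard argparse module. The sketch below is a hypothetical re-creation of that argument handling; the defaults shown are assumptions based on the values quoted in this README, and the actual script may differ:

```python
import argparse

# Hypothetical re-creation of the flags perform_transfer.py accepts.
parser = argparse.ArgumentParser(description="Neural style transfer")
parser.add_argument("--nEpochs", type=int, default=500,
                    help="number of training epochs")
parser.add_argument("--learning_rate", type=float, default=10.0,
                    help="optimizer learning rate")
parser.add_argument("--content_path", type=str, default="content/",
                    help="folder of subject photos")
parser.add_argument("--style_path", type=str, default="styles/",
                    help="folder of style photos")
parser.add_argument("--save_folder", type=str, default="results/",
                    help="where output images are written")
parser.add_argument("--display_num", type=int, default=10,
                    help="how often to display intermediate results")

# Simulate the command line shown above
args = parser.parse_args(["--nEpochs=100", "--learning_rate=10.0"])
print(args.nEpochs, args.learning_rate)  # 100 10.0
```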

Preparing Photos

Both the style and content photos must have the same dimensions in order to run. To avoid consuming too much memory and training time, photos should be 500x500 pixels or smaller. After preparing the photos, put them in the appropriate folders and run the code!
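One simple way to meet both requirements (matching dimensions, at most 500x500 px) is to resize every input with Pillow before training. This is a hedged sketch, not part of the repository; the function name and the fixed 500x500 target are illustrative assumptions:

```python
from PIL import Image

def prepare_image(img, size=(500, 500)):
    """Resize an image so content and style inputs share one shape.

    Forces RGB (dropping any alpha channel) and uses Lanczos
    resampling for reasonable quality when downscaling.
    """
    return img.convert("RGB").resize(size, Image.LANCZOS)

# Toy images standing in for a subject photo and a style painting
content = Image.new("RGB", (1200, 800), "gray")
style = Image.new("RGB", (640, 640), "blue")

content_small = prepare_image(content)
style_small = prepare_image(style)
print(content_small.size == style_small.size)  # True
```

Note that forcing both images to one square size distorts aspect ratios; cropping to a square before resizing is an alternative if that matters for your photos.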