christmAIs

License: GPL v2 | Python 3.6+

christmAIs ("krees-ma-ees") is text-to-abstract art generation for the holidays!

This work converts any input string into abstract art by:

  • finding the Quick, Draw! category most similar to the input string (using word embeddings),
  • drawing the matched category with Sketch-RNN, and
  • applying arbitrary style transfer to the resulting sketch.

This results in images that look like these:

(sample output images)
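To give a feel for the first step (matching the input string to a Quick, Draw! category), here is a minimal, illustrative sketch using gensim and public GloVe vectors. The model name, sample category list, and closest_category helper are assumptions for demonstration only, not the package's actual code.

# Illustrative sketch only (not the package's implementation): pick the
# Quick, Draw! category whose GloVe vector is closest to the input words.
import gensim.downloader as api
# Public GloVe vectors via gensim's downloader (fetched on first use);
# christmAIs handles its own embeddings, this model name is an assumption.
vectors = api.load("glove-wiki-gigaword-50")
# A handful of sample categories; the real list lives in categories.txt.
categories = ["cat", "bicycle", "campfire", "snowman", "truck"]

def closest_category(text):
    """Return the sample category most similar to any word in the text."""
    words = [w for w in text.lower().split() if w in vectors]
    if not words:
        return None
    scores = {c: max(vectors.similarity(w, c) for w in words) for c in categories}
    return max(scores, key=scores.get)

print(closest_category("Thinking Machines"))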

Setup and Installation

Please see requirements.txt and requirements-dev.txt for all Python-related dependencies. Notable dependencies include:

  • numpy==1.14.2
  • scikit_learn==0.20.0
  • Pillow==5.3.0
  • matplotlib==2.1.0
  • tensorflow
  • gensim
  • magenta

The build steps (used for our automated cloud builds) can be seen in the Dockerfile. For local development, we recommend setting up a virtual environment. To do that, simply run the following commands:

git clone git@github.com:thinkingmachines/christmAIs.git
cd christmAIs
make venv

Automated Install

We created an automated install script to perform a one-click setup in your workspace. To run the script, execute the following commands:

source venv/bin/activate  # Highly recommended
./install-christmais.sh

This will first install magenta and its dependencies, download the file dependencies (categories.txt, model.ckpt, and chromedriver), and then clone and install this package.

Manual Install

For manual installation, please follow the instructions below:

Installing magenta

The style transfer capabilities depend on the magenta package. As of now, magenta is only supported on Linux and macOS. To install magenta, you can perform the automated install above or follow these steps:

# Install OS dependencies
apt-get update && \
apt-get install -y build-essential libasound2-dev libjack-dev

# Install magenta
venv/bin/pip install magenta
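If you want to double-check the result, the short snippet below (an optional convenience, not part of the project) simply confirms that tensorflow and magenta import cleanly from the virtual environment.

# Optional sanity check (illustrative): confirm tensorflow and magenta
# import cleanly inside the virtual environment.
import importlib

for pkg in ("tensorflow", "magenta"):
    module = importlib.import_module(pkg)
    print(pkg, getattr(module, "__version__", "installed"))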

Installing everything else

You can then install the remaining dependencies in requirements.txt. Assuming that you have created a virtual environment via make venv, we recommend that you simply run the following command:

make build # or `make dev`

This will also download (via wget) the following files:

  • categories.txt (683 B): contains the list of Quick, Draw! categories to compare the input string against (will be saved at ./categories/categories.txt).
  • arbitrary_style_transfer.tar.gz (606.20 MB): contains the model checkpoint for style transfer (will be saved at ./ckpt/model.ckpt).
  • chromedriver (5.09 MB): contains the web driver for accessing the HTML output for Sketch-RNN (will be saved at ./webdriver/chromedriver).
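To confirm the downloads landed where expected, you can run a small check like the one below. It is only an illustrative convenience; the paths are the defaults listed above.

# Optional check (illustrative): verify the downloaded dependencies are in
# the default locations listed above. Note that the style-transfer
# checkpoint may unpack into several model.ckpt.* files.
from pathlib import Path

expected = [
    Path("categories/categories.txt"),
    Path("ckpt/model.ckpt"),
    Path("webdriver/chromedriver"),
]
for path in expected:
    print(f"{path}: {'found' if path.exists() else 'MISSING'}")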

Generating the documentation

Ensure that you have all dev dependencies installed:

git clone git@github.com:thinkingmachines/christmAIs.git
cd christmAIs
make venv
make dev

Then, to build the actual docs:

cd docs/
make html

This will generate an index.html file that you can view in your browser.

Usage

We have provided a script, christmais_time.py, to easily generate your stylized Quick, Draw! images. To use it, simply run the following command:

python -m christmais.tasks.christmais_time     \
    --input=<Input string to draw from>        \
    --style=<Path to style image>              \
    --output=<Unique name of output file>      \
    --model-path=<Path to model.ckpt>          \
    --categories-path=<Path to categories.txt> \
    --webdriver-path=<Path to webdriver>

If you followed the setup instructions above, then the default values for the paths should suffice; you only need to supply --input, --style, and --output.
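If you want to generate several images in one go, a thin wrapper around the same command works well. The sketch below is an illustrative helper (not part of the package); the style path and input phrases are placeholders, and it assumes the default paths from the setup above.

# Illustrative helper (not part of the package): run the CLI above for
# several input strings, relying on the default model/categories/webdriver
# paths set up earlier.
import subprocess

def stylize(text, style, output):
    subprocess.run(
        [
            "python", "-m", "christmais.tasks.christmais_time",
            f"--input={text}",
            f"--style={style}",
            f"--output={output}",
        ],
        check=True,
    )

for i, phrase in enumerate(["Thinking Machines", "Happy Holidays"]):
    stylize(phrase, "./path/to/style.jpg", f"sample-{i}")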

As an example, let's say we want to use the string Thinking Machines as our basis, with the style of Ang Kiukok's Fishermen (ang_kiukok.jpg). The command will then look like this:

python -m christmais.tasks.christmais_time \
    --input="Thinking Machines"            \
    --style=./path/to/ang_kiukok.jpg       \
    --output=tmds-output

This will then generate the output image in ./artifacts/.

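To quickly inspect the result, you can open the artifact with Pillow (already a listed dependency). The filename below is an assumption based on the --output value above; adjust the extension to whatever christmais_time.py actually writes.

# Illustrative: open the generated artifact with Pillow. The exact output
# filename/extension is an assumption; adjust it to the file written
# into ./artifacts/.
from PIL import Image

Image.open("./artifacts/tmds-output.png").show()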

References

  • Pennington, Jeffrey, Socher, Richard, et al. (2014). “GloVe: Global Vectors for Word Representation”. In: Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pp. 1532-1543.
  • Ha, David and Eck, Douglas (2017). “A Neural Representation of Sketch Drawings”. In: arXiv:1704.03477.
  • Ghiasi, Golnaz et al. (2017). “Exploring the structure of a real-time, arbitrary neural artistic stylization network”. In: arXiv:1705.06830.
  • Magenta demonstration (sketch-rnn-js): https://github.com/hardmaru/magenta-demos/tree/master/sketch-rnn-js