vijayvee/text-to-image-synthesis

Generative Adversarial Text to Image Synthesis

In this project, I designed and trained a model that takes natural language captions in English and generates images relevant to those captions.

I used skip-thoughts to encode the input caption before feeding it to both the generator and the discriminator.

The method is inspired by the paper Generative Adversarial Text to Image Synthesis (Reed et al., 2016).
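As a rough sketch of how that conditioning works (not the repository's actual code): the 4800-dimensional skip-thought vector is projected down to a small learned embedding, which is concatenated with a noise vector to form the generator's input. The dimensions below (4800-d sentence vector, 128-d projection, 100-d noise) are typical values from the Reed et al. setup and are assumptions here, as is the stand-in random "embedding":

```python
# Sketch of text-conditioned generator input, in the style of
# Reed et al. (2016). NumPy only; dimensions are assumed, not
# taken from this repository's code.
import numpy as np

rng = np.random.RandomState(0)

EMBED_DIM = 4800   # skip-thoughts produces 4800-d sentence vectors
PROJ_DIM = 128     # compressed text embedding fed to the GAN
Z_DIM = 100        # generator noise dimension

# Stand-in for the output of the skip-thoughts encoder on one caption.
caption_embedding = rng.randn(EMBED_DIM).astype(np.float32)

# Learned projection (a fully connected layer in the real model),
# followed by a ReLU non-linearity.
W = rng.randn(EMBED_DIM, PROJ_DIM).astype(np.float32) * 0.01
b = np.zeros(PROJ_DIM, dtype=np.float32)
projected = np.maximum(np.dot(caption_embedding, W) + b, 0.0)

# The generator sees noise concatenated with the projected caption;
# the discriminator is conditioned on the same projected embedding.
z = rng.randn(Z_DIM).astype(np.float32)
generator_input = np.concatenate([z, projected])  # shape (Z_DIM + PROJ_DIM,)
```

In the full model this concatenated vector is fed through transposed convolutions to produce the image, and the discriminator receives the same text embedding alongside real or generated images so it can reject images that do not match the caption.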

Prerequisites

To clone this repository and reproduce its results, you will need to install:

  • Python 2.7
  • NumPy
  • TensorFlow
  • Theano (for using skip-thoughts)
  • OpenCV

Contributing

Feel free to clone this repository and add extensions. Pull requests are welcome.

Authors

Acknowledgments

Sample output

Included below are a few sample input captions from my experiments; the generated image for each caption is shown in the repository:

  • these flowers have an open face with many pale pink petals.
  • this flower is red in color, and has petals that are closely wrapped around the center.
  • this is a very cool flower with very bold white on the petals and a very unique middle.
  • multiple layers of reddish-yellow petals that decrease in size as they are closer to the top of the flower.
  • petals are light purple in color with longer stamens and a prominent green pistil.
