Experiments with Baudelaire and a text-to-image GAN.
Creates item images using procedural generation and neural networks
Multi-Modal Image Generation for News Stories
COMP4971C Independent Study Project Repository.
VQGAN and CLIP are two separate machine learning models that can be used together to generate images from a text prompt. VQGAN is a generative adversarial network that is good at generating images that look similar to others (but not from a prompt), and CLIP is a neural network that can determine how well a c…
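The pairing described above amounts to an optimization loop: a latent code is repeatedly nudged so that CLIP scores the decoded image as a better match for the prompt. A minimal sketch of that loop's shape is below; the real pipeline uses large pretrained networks, so here CLIP's similarity score is replaced by a toy stand-in (negative squared distance to a fixed target vector), and the VQGAN decoder is omitted. All names (`similarity`, `optimize`, `TARGET`) are illustrative, not part of any of the listed repositories.

```python
import random

# Toy stand-in for a CLIP text embedding of the prompt. In a real
# VQGAN+CLIP pipeline this would come from CLIP's text encoder; here it
# is just a fixed random vector (an assumption for illustration).
random.seed(0)
TARGET = [random.uniform(-1, 1) for _ in range(8)]

def similarity(latent):
    """Stand-in for CLIP's image/text similarity score.

    Real CLIP compares an encoded image against the encoded prompt; this
    toy version rewards latents close to TARGET (negative squared
    distance), which is enough to show the optimization loop's shape.
    """
    return -sum((z - t) ** 2 for z, t in zip(latent, TARGET))

def optimize(steps=200, lr=0.1):
    # Start from a random latent, as VQGAN+CLIP starts from random codes.
    latent = [random.uniform(-1, 1) for _ in range(8)]
    for _ in range(steps):
        # Gradient of the (toy) similarity w.r.t. each component:
        # d/dz of -(z - t)^2 is -2 * (z - t).
        grad = [-2 * (z - t) for z, t in zip(latent, TARGET)]
        # Gradient *ascent*: nudge the latent toward higher similarity,
        # mirroring how latent codes are updated against the CLIP loss.
        latent = [z + lr * g for z, g in zip(latent, grad)]
    return latent

print(similarity([0.0] * 8), "->", similarity(optimize()))
```

In the actual repositories listed here, `similarity` is CLIP comparing the VQGAN-decoded image to the prompt, and the update step is backpropagation through both networks; the loop structure is the same.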
Translating speech directly to images, without an intermediate text step, is an interesting and useful topic with potential applications in computer-aided design, human-computer interaction, art creation, etc. We have therefore focused on developing a deep-learning- and GAN-based model that takes speech as input from the user, analyzes the emotions …
yet another VQGAN-CLIP variation
Machines' vivid dreams
AI-powered art generator based on VQGAN+CLIP
VQGAN+CLIP implementation for aarch64 architecture testing and benchmarking with machine learning workloads
Creates ability-icon images using procedural generation and neural networks
The purpose of the project is to understand a basic GAN from scratch. A WGAN was built to generate people's faces based on the CelebA dataset. A VQGAN+CLIP model was used to generate unique designs for use in fashion.
ArtAI is an interactive art installation that collects people's ideas in real-time from social media and uses deep learning and AI art generation to curate these ideas into a dynamic display.
Mozart - A Generative Art Platform
Art generation using VQGAN+CLIP in Docker containers. A simplified, updated, and expanded version of Kevin Costa's work. This project tries to make generating art as easy as possible for anyone with a GPU by providing a simple web UI.
Video generator using CLIP+VQGAN and sdvm
A simple library that implements CLIP guided loss in PyTorch.
Traditional deepdream with VQGAN+CLIP and optical flow. Ready to use in Google Colab.