[NeurIPS 2022] Towards Robust Blind Face Restoration with Codebook Lookup Transformer
Implements VQGAN+CLIP for image and video generation, and style transfers, based on text and image prompts. Emphasis on ease-of-use, documentation, and smooth video creation.
[CVPR 2023] RIDCP: Revitalizing Real Image Dehazing via High-Quality Codebook Priors
PyTorch implementation of Make-A-Scene: Scene-Based Text-to-Image Generation with Human Priors
Zero-Shot Text-to-Image Generation VQGAN+CLIP Dockerized
Streamlit Tutorial (ex: stock price dashboard, cartoon-stylegan, vqgan-clip, stylemixing, styleclip, sefa)
Feed forward VQGAN-CLIP model, where the goal is to eliminate the need for optimizing the latent space of VQGAN for each input prompt
PyTorch code for "Real-World Blind Super-Resolution via Feature Matching with Implicit High-Resolution Priors", ACM MM 2022 (Oral)
Official implementation of Multi-Stage Multi-Codebook (MSMC) TTS
Fast and controllable text-to-image model.
NTIRE 2022 - Image Inpainting Challenge
Implementation of Binary Latent Diffusion
Traditional deepdream with VQGAN+CLIP and optical flow. Ready to use in Google Colab.
VQ-VAE/GAN implementation in pytorch-lightning
Text-to-Image Synthesis using Multimodal (VQGAN + CLIP) Architectures
Art generation using VQGAN + CLIP in Docker containers. A simplified, updated, and expanded version of Kevin Costa's work. This project aims to make generating art as easy as possible for anyone with a GPU by providing a simple web UI.