StyleGAN was first used to generate novel images within a given domain. It was later also used to project real faces into the model's latent space.
Original Image | Projected Image |
The projected images can then be used for a myriad of different use cases.
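Projection is usually done by optimizing a latent code until the generator's output matches the target photo. The following is a toy sketch of that idea, with a random linear map standing in for the real generator (all names and shapes here are illustrative, not StyleGAN's actual API):

```python
import numpy as np

# Toy sketch of latent projection: optimize a latent code w so that a
# stand-in "generator" G reproduces the target image. A real projector
# does the same with a pretrained StyleGAN and a perceptual loss.
rng = np.random.default_rng(0)
G = rng.normal(size=(64, 16))          # stand-in generator: latent -> image
target = G @ rng.normal(size=16)       # the "real photo" we want to project

w = np.zeros(16)                       # latent code being optimized
lr = 0.01
for _ in range(500):
    residual = G @ w - target          # reconstruction error in image space
    grad = G.T @ residual              # gradient of 0.5 * ||G w - target||^2
    w -= lr * grad                     # gradient-descent step on the latent
```

After optimization, `w` is the projected latent: feeding it back through the generator reproduces the photo, and it becomes the handle for all the edits described below.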
StyleGAN also made it possible to combine the features of two images through style mixing.
Image 1 | Image 2 | Style Mixed image with Z_low = 1 and Z_high = 8 |
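Style mixing works because StyleGAN feeds a per-layer latent into each synthesis layer: copying the coarse (low-resolution) layers from one image and the fine layers from another blends their features, with the caption's Z_low/Z_high marking the crossover range. A minimal sketch, assuming StyleGAN2-like shapes (18 layers, 512-dimensional latents):

```python
import numpy as np

# Toy sketch of style mixing: take the coarse layers of one latent and
# the fine layers of another. Shapes follow StyleGAN2's W+ space
# (18 layers x 512 dims); the latents here are random stand-ins.
num_layers, dim = 18, 512
rng = np.random.default_rng(0)
w1 = rng.normal(size=(num_layers, dim))  # latent of image 1
w2 = rng.normal(size=(num_layers, dim))  # latent of image 2

crossover = 8                            # layers [0, 8) from w1, rest from w2
mixed = np.concatenate([w1[:crossover], w2[crossover:]])
```

Feeding `mixed` through the generator yields an image whose coarse structure (pose, face shape) comes from image 1 and whose fine detail (texture, color) comes from image 2.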
Another useful property of these projected images is that individual characteristics of the subject, such as age or gender, become individually tweakable.
Original Image | Age = +2, Gender = +4 |
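These edits amount to moving the projected latent along learned attribute directions, with the step size controlling the edit strength (the "+2" and "+4" in the caption). A sketch with hypothetical, randomly generated directions standing in for the learned ones:

```python
import numpy as np

# Toy sketch of latent-direction editing: shift a projected latent along
# attribute directions. The directions here are random stand-ins; in
# practice they are learned (e.g. from labeled latent samples).
dim = 512
rng = np.random.default_rng(0)
w = rng.normal(size=dim)                  # projected latent of the photo

age_dir = rng.normal(size=dim)
age_dir /= np.linalg.norm(age_dir)        # unit direction for "age"
gender_dir = rng.normal(size=dim)
gender_dir /= np.linalg.norm(gender_dir)  # unit direction for "gender"

# "Age = +2, Gender = +4": step sizes along the respective directions.
w_edited = w + 2.0 * age_dir + 4.0 * gender_dir
```

Regenerating from `w_edited` produces the same subject with the chosen attributes shifted, while the rest of the image stays largely intact.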
In 2021, a model called StyleGAN-NADA was introduced that was capable of transferring images to different domains using text guidance. Better yet, these domain transfers composed with all the editing features already available in StyleGANs.
Original Image | Anime |
Age = +6 | Anime, Age = +6 |
In 2021, Tencent's Applied Research Center developed a model called GFP-GAN, which was capable of improving the quality and resolution of images, specifically images of faces.
Original Image | Restored Image |
With the rising popularity of transformer-based language models in recent years, the underlying technology of image generators quickly shifted from GANs to text-conditioned diffusion models. One such model that has shown impressive results is Stable Diffusion. Not only can it generate images from text prompts, it can also alter existing images using those prompts.
Seasons
Original Image | Seattle in Winter | Seattle in summer | Seattle in spring |
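Prompt-based edits like these typically use an image-to-image process: the original photo is partially noised, then denoised under the guidance of the prompt, with a strength parameter controlling how much of the original survives. A toy sketch of that trade-off, using a stand-in "denoiser" rather than a real diffusion model (the prompt target here is a random placeholder):

```python
import numpy as np

# Toy sketch of the img2img idea: noise the input partway, then denoise
# toward what the prompt implies. The "denoiser" and prompt target are
# stand-ins; a real pipeline runs a trained diffusion model instead.
rng = np.random.default_rng(0)
original = rng.normal(size=256)        # the input photo, flattened
prompt_target = rng.normal(size=256)   # stand-in for what the prompt "wants"

def img2img(image, target, strength):
    # Forward pass: blend in noise proportional to strength.
    noised = (1 - strength) * image + strength * rng.normal(size=image.shape)
    # Reverse pass: the stand-in denoiser pulls toward the prompt target.
    return (1 - strength) * noised + strength * target

mild = img2img(original, prompt_target, strength=0.2)
strong = img2img(original, prompt_target, strength=0.9)
```

A low strength (a light seasonal touch-up) stays close to the original photo, while a high strength rewrites most of the scene in favor of the prompt.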