Variational Generative Image Compression

Image Compression using VAEGANs
Explore the repository»
View Report

tags : image compression, vaegans, gans, generative networks, flickr30k, deep learning, pytorch

About The Project

In the modern world, a huge amount of data is sent across the internet every day, making efficient data compression and communication protocols an active area of research. We focus on the compression of images and video (sequences of image frames) using deep generative models and show that they achieve better compression ratios and perceptual quality. We explore the use of VAEGANs for this task. Generative models such as GANs and VAEs can reproduce an image from its latent vector; we ask whether we can go in the other direction, from an image to a latent vector. Research, though limited, has shown that such methods are effective and efficient. A detailed description of the algorithms and an analysis of the results are available in the Report.

Built With

This project was built with

  • Python v3.8.5
  • PyTorch v1.7
  • The environment used for developing this project is available at environment.yml.

Getting Started

Clone the repository to a local machine and enter the src directory:

git clone https://github.com/vineeths96/Variational-Generative-Image-Compression
cd Variational-Generative-Image-Compression/src

Prerequisites

Create a new conda environment and install all the required libraries by running:

conda env create -f environment.yml

The dataset used in this project (Flickr30K) will be automatically downloaded and set up in the input directory during execution.

Instructions to run

To train the model with n latent channels, run:

python main.py --train-model True --num-channels <n> 

This trains the VAEGAN model and saves it in the models directory.

To evaluate the model on the compressed images, run:

python main.py 

This generates a folder in the results directory for each run. The generated folder contains the images compressed using different numbers of channels. It also calculates the average PSNR and SSIM values across runs and writes avg_psnr.txt and avg_ssim.txt to the results directory.

Model overview

The architecture of the model is shown below. We freeze the GAN model and optimize for the best latent vector using gradient descent.

(Figure: model architecture)
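The inversion step described above (decoder frozen, latent vector optimized by gradient descent on the reconstruction loss) can be sketched as follows. This is a minimal stand-in, not the project's code: a fixed random linear map plays the role of the frozen decoder (in the actual project it is the trained VAEGAN generator), and the names `decode`, `latent_dim`, and the learning rate are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for the frozen decoder/generator: a fixed linear map from
# latent space to image space. The real project uses the trained VAEGAN.
latent_dim, image_dim = 8, 64
W = rng.normal(size=(image_dim, latent_dim))

def decode(z):
    return W @ z

# A target "image" the decoder can represent, and an initial latent guess.
x = decode(rng.normal(size=latent_dim))
z = np.zeros(latent_dim)

# Gradient descent on the reconstruction loss ||decode(z) - x||^2,
# keeping the decoder weights W frozen -- only z is updated.
lr = 4e-3
for _ in range(500):
    residual = decode(z) - x
    grad = 2.0 * W.T @ residual  # analytic gradient w.r.t. z
    z -= lr * grad

print(np.linalg.norm(decode(z) - x))  # reconstruction error, near zero
```

In the real model the decoder is nonlinear, so the gradient comes from automatic differentiation (e.g. PyTorch autograd) rather than a closed form, but the structure of the loop is the same.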

Results

We evaluate the models using the Structural Similarity Index (SSIM) and Peak Signal-to-Noise Ratio (PSNR) between the original and reconstructed images. More detailed results and inferences are available in the Report.
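As a concrete illustration of the PSNR metric, a minimal numpy version for 8-bit images might look like the sketch below (SSIM is more involved; `skimage.metrics.structural_similarity` from scikit-image is a standard implementation). This is not the project's evaluation code, just the textbook formula.

```python
import numpy as np

def psnr(original, reconstructed, max_val=255.0):
    """Peak Signal-to-Noise Ratio in dB between two images of equal shape."""
    diff = original.astype(np.float64) - reconstructed.astype(np.float64)
    mse = np.mean(diff ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(max_val ** 2 / mse)

# Example: a uniform error of 4 intensity levels on an 8-bit image.
a = np.full((32, 32), 100, dtype=np.uint8)
b = np.full((32, 32), 104, dtype=np.uint8)
print(round(psnr(a, b), 2))  # 10*log10(255^2 / 16) ≈ 36.09
```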

| Number of Channels | SSIM | PSNR (dB) | Compression Ratio |
| ------------------ | ---- | --------- | ----------------- |
| 28                 | 0.83 | 24.79     | 1.74×             |
| 16                 | 0.80 | 24.37     | 3.06×             |
| 8                  | 0.81 | 24.06     | 6.12×             |
| 4                  | 0.79 | 23.57     | 12.24×            |
| 2                  | 0.71 | 21.57     | 24.49×            |
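The reported compression ratios scale almost exactly inversely with the number of latent channels, i.e. CR ≈ K/n for a constant K set by the image and latent spatial sizes. A quick sanity check against the table's own numbers (an illustrative script, not project code):

```python
# Reported (channels, compression ratio) pairs from the table above.
reported = [(28, 1.74), (16, 3.06), (8, 6.12), (4, 12.24), (2, 24.49)]

# If CR ~ K / n, then n * CR should be roughly constant across rows.
products = [n * cr for n, cr in reported]
print(products)  # each product is close to 49
```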

The figure below compares the reconstruction quality for a sample image under the different channel settings.

(Figure: reconstruction quality comparison)

License

Distributed under the MIT License. See LICENSE for more information.

Contact

Vineeth S - vs96codes@gmail.com

Project Link: https://github.com/vineeths96/Variational-Generative-Image-Compression
