fork123aniket/Denoising-Diffusion-Probabilistic-Model-from-Scratch

Denoising Diffusion Probabilistic Model from Scratch

This repository implements a fast yet simple version of the Denoising Diffusion Probabilistic Models paper for image generation. Denoising score matching lets the network rapidly estimate the gradient of the data distribution, and Langevin-style sampling is then used to generate images from the learned distribution. The implementation provides both unconditional and conditional sampling, the latter via Classifier-Free Diffusion Guidance (CFG), and maintains an Exponential Moving Average (EMA) of the model weights.
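For orientation, the forward diffusion process noises an image in closed form, q(x_t | x_0): x_t = sqrt(ᾱ_t)·x_0 + sqrt(1 − ᾱ_t)·ε, with a linear β schedule as in the DDPM paper. The sketch below illustrates that step only; the function and variable names are illustrative, not the repo's actual API:

```python
import torch

def linear_beta_schedule(T=1000, beta_start=1e-4, beta_end=0.02):
    # Linearly spaced noise variances beta_1 ... beta_T (DDPM paper defaults)
    return torch.linspace(beta_start, beta_end, T)

def forward_noise(x0, t, alpha_hat):
    # Sample x_t ~ q(x_t | x_0) = N(sqrt(alpha_hat_t) x_0, (1 - alpha_hat_t) I)
    sqrt_ah = torch.sqrt(alpha_hat[t])[:, None, None, None]
    sqrt_one_minus = torch.sqrt(1.0 - alpha_hat[t])[:, None, None, None]
    eps = torch.randn_like(x0)
    return sqrt_ah * x0 + sqrt_one_minus * eps, eps

betas = linear_beta_schedule()
alphas = 1.0 - betas
alpha_hat = torch.cumprod(alphas, dim=0)   # cumulative product: alpha-bar_t

x0 = torch.randn(4, 3, 64, 64)             # a fake batch of images
t = torch.randint(0, 1000, (4,))           # a random timestep per sample
xt, eps = forward_noise(x0, t, alpha_hat)
```

The network is then trained to predict `eps` from `(xt, t)`, which is the denoising-score-matching objective in disguise.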

Requirements

  • PyTorch
  • torchvision
  • numpy
  • Pillow (PIL)
  • matplotlib
  • logging
  • tqdm

Usage

Data

The unconditional model is trained on the Landscape Pictures dataset, and the conditional model is trained on the CIFAR-10 dataset.

Training and Testing

  • The conditional and unconditional sampling methods are implemented in DDPM.py.
  • The network architectures (UNet, EMA, etc.) are defined in models.py.
  • To train a DDPM with either sampling approach (conditional or unconditional) and generate images, run DDPM.py.
  • All hyperparameters controlling the training and testing phases are set in DDPM.py.
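During conditional sampling, classifier-free guidance combines the two noise predictions at each step as ε̃ = ε_uncond + w·(ε_cond − ε_uncond), i.e. a linear interpolation pushed past the unconditional prediction. A minimal sketch of that combination step, assuming both predictions are already available (names here are illustrative, not the repo's):

```python
import torch

def cfg_noise(eps_uncond, eps_cond, guidance_scale=3.0):
    # Classifier-free guidance: move the unconditional prediction toward
    # (and, for scale > 1, beyond) the conditional one.
    # torch.lerp(a, b, w) computes a + w * (b - a).
    return torch.lerp(eps_uncond, eps_cond, guidance_scale)

eps_u = torch.zeros(2, 3, 32, 32)   # stand-in unconditional prediction
eps_c = torch.ones(2, 3, 32, 32)    # stand-in conditional prediction
eps = cfg_noise(eps_u, eps_c, guidance_scale=3.0)  # → all entries equal 3.0
```

With `guidance_scale > 1` the update over-emphasizes the class-conditional signal, which typically sharpens samples at some cost in diversity.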

Results

The images generated by both the conditional and unconditional models are shown below:

| Training Dataset | Sampling Type | Generated Images |
|---|---|---|
| Landscape Pictures | unconditional | (sample grid in repo) |
| CIFAR-10 | conditional (on Dog and Deer classes) | (sample grids in repo) |
