
Consistency Models 1D Toy Tasks

Minimal unofficial implementation of consistency models (CM) proposed by Song et al. 2023 on 1D toy tasks.


Installation

pip install -e .

Consistency Model Training

This repo contains implementations of Consistency Distillation (CD) and Consistency Training (CT). For better performance with Consistency Training, there is an option to pretrain the model with a diffusion objective before switching to the CT objective. Using a diffusion training objective before starting CT stabilizes the training process significantly.

To try it out:

  • Consistency Distillation: cd_main.py

  • Discrete Consistency Training: cm_main.py

  • Continuous Consistency Training: ct_cm_main.py
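
To make the training objective concrete, below is a minimal sketch of one discrete consistency training step from Song et al. 2023. The model signature model(x, sigma), the helper names, and the EMA rate are assumptions for illustration, not this repo's actual API; the model is assumed to already satisfy the boundary condition f(x, sigma_min) = x through the usual skip/output parameterization.

import torch

def consistency_training_loss(model, ema_model, x0, sigmas):
    # One discrete CT step: perturb the same clean batch x0 at two
    # adjacent noise levels with the same Gaussian noise, then pull the
    # online model's output at the higher level towards the EMA
    # target's output at the lower level.
    B = x0.shape[0]
    n = torch.randint(0, len(sigmas) - 1, (B,))  # random adjacent pair
    s_lo = sigmas[n].view(-1, 1)                 # sigma_n
    s_hi = sigmas[n + 1].view(-1, 1)             # sigma_{n+1}
    noise = torch.randn_like(x0)
    pred = model(x0 + s_hi * noise, s_hi)
    with torch.no_grad():
        target = ema_model(x0 + s_lo * noise, s_lo)
    return torch.mean((pred - target) ** 2)

def update_ema(ema_model, model, mu=0.999):
    # EMA target update after each optimizer step; a typical setup is
    # ema_model = copy.deepcopy(model) at the start of training.
    for p_ema, p in zip(ema_model.parameters(), model.parameters()):
        p_ema.data.mul_(mu).add_(p.data, alpha=1.0 - mu)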

Data

I have implemented some simple 1D toy tasks to test the multimodality and expressiveness of consistency models. Just change the input string of the datamanager class to one of the following datasets: 'three_gmm_1D', 'uneven_two_gmm_1D', 'two_gmm_1D', 'single_gaussian_1D'.
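
For intuition, here is a sketch of what a dataset like 'two_gmm_1D' produces: an equally weighted mixture of two 1D Gaussians. The means and standard deviation below are illustrative choices, not necessarily the values used in this repo.

import torch

def sample_two_gmm_1d(n, means=(-1.0, 1.0), std=0.1):
    # Equally weighted two-component 1D Gaussian mixture; the exact
    # means/scales in the repo's datamanager may differ.
    comp = torch.randint(0, 2, (n, 1))
    mu = torch.where(comp == 0, torch.tensor(means[0]), torch.tensor(means[1]))
    return mu + std * torch.randn(n, 1)  # shape (n, 1)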


Visualization of the Results

After training, the results of the trained model are plotted. The plots look like the examples below.
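
As a rough idea of what such a plot contains (this is not the repo's plotting code), one can overlay histograms of ground-truth data and model samples:

import numpy as np
import matplotlib.pyplot as plt

def plot_1d_samples(data, model_samples):
    # Overlaid density histograms of 1D arrays (or flattened CPU tensors);
    # the bin range is illustrative.
    bins = np.linspace(-2.0, 2.0, 100)
    plt.hist(np.ravel(data), bins=bins, density=True, alpha=0.5, label='data')
    plt.hist(np.ravel(model_samples), bins=bins, density=True, alpha=0.5, label='model')
    plt.xlabel('x')
    plt.ylabel('density')
    plt.legend()
    plt.show()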

Some results

These results used 2000 training steps with diffusion pretraining.

Two Gaussians

From left to right: EDM diffusion pretraining with Euler sampling, multistep prediction with consistency models, and single-step prediction.

Three Gaussians

From left to right: EDM diffusion pretraining with Euler sampling, multistep prediction with consistency models, and single-step prediction.


Lessons learned

  • Consistency training is not really stable, which is not surprising: the authors discuss its shortcomings in the paper and even recommend using pretrained diffusion models as initialization for training.

  • Image hyperparameters do not translate well to other domains. I had limited success with the parameters recommended for CIFAR-10 and other image-based applications. Results improved after significantly reducing the maximum noise level. I also increased both the minimum and the maximum number of discrete noise levels.

  • Multistep prediction with consistency models has a certain drift towards the outside, which I cannot explain. I only used a linear noise scheduler for the multistep sampling, so results may improve with a better discretization (see the sampling sketch after this list).

  • Discrete training works a lot better than the continuous version. The authors report similar observations for the high-dimensional image domain.
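
For reference, here is a sketch of the multistep consistency sampling loop mentioned above (Algorithm 1 in Song et al. 2023), using a linear noise schedule. The model signature, tensor shapes, and default sigma values are assumptions for the 1D toy setting, not this repo's exact interface.

import torch

@torch.no_grad()
def multistep_sample(model, n_samples, sigma_min=0.002, sigma_max=1.0, steps=5):
    # Decreasing linear schedule sigma_max -> sigma_min (defaults are
    # illustrative); better discretizations could be substituted here.
    sigmas = torch.linspace(sigma_max, sigma_min, steps)
    # One-step prediction from pure noise at the highest noise level.
    x = sigmas[0] * torch.randn(n_samples, 1)
    x = model(x, sigmas[0].expand(n_samples, 1))
    # Re-noise to each intermediate level, then denoise again.
    for sigma in sigmas[1:]:
        z = torch.randn_like(x)
        x = x + (sigma ** 2 - sigma_min ** 2).sqrt() * z
        x = model(x, sigma.expand(n_samples, 1))
    return x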

To Dos

  • Implement Consistency Distillation Training
  • Add new toy tasks
  • Check conditional training
  • Find good hyperparameters
  • Improve plotting method


Citation

@article{song2023consistency,
  title={Consistency Models},
  author={Song, Yang and Dhariwal, Prafulla and Chen, Mark and Sutskever, Ilya},
  journal={arXiv preprint arXiv:2303.01469},
  year={2023},
}
