
On the Posterior Distribution in Denoising: Application to Uncertainty Quantification (ICLR 2024) - Official Implementation

Hila Manor and Tomer Michaeli
Technion - Israel Institute of Technology

This repository contains the official code release for On the Posterior Distribution in Denoising: Application to Uncertainty Quantification (ICLR 2024).

(Intro animation: Animations_Intro.mp4)


Requirements

python -m pip install -r requirements.txt

Note that for DDPM (faces), the underlying code uses Open-MPI, which sometimes has problems installing on machines where conda is installed.
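
If a clean environment is desired, a minimal setup sketch (the virtual-environment name .venv is arbitrary and only an example):

    python -m venv .venv                        # keeps the install separate from any conda base env
    source .venv/bin/activate
    python -m pip install -r requirements.txt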

Pre-trained Models

We support a number of pre-trained models; for each, clone the corresponding repository and download its checkpoints as necessary.

MNIST

As this is a simple network we built and trained, the checkpoint is already included in the repo.

KAIR

The KAIR repository contains the implementations for several denoisers that we support: DnCNN, IRCNN, and SwinIR.

  1. Clone their repo (provide this path to the --model_zoo parameter)

    git clone https://github.com/cszn/KAIR.git 
  2. Follow the instructions in KAIR/model_zoo/README.md to download the desired checkpoints, or see the SwinIR repository for its SwinIR checkpoints.

    • We used the colorDN_DFWB_s128w8_SwinIR-M models, but the interface should be able to use most versions (a combined setup sketch follows these steps).
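
A hedged sketch of the combined KAIR setup; the checkpoint placement mirrors the steps above, and the exact checkpoint filenames depend on which models you download:

    git clone https://github.com/cszn/KAIR.git
    # download the desired DnCNN / IRCNN / SwinIR checkpoints into KAIR/model_zoo/
    ls KAIR/model_zoo/                          # verify the checkpoint files are in place
    # later, point the main script at this folder, e.g.:
    #   python main.py ... --model_zoo ./KAIR/model_zoo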

Noise2Void

The original official implementation of Noise2Void is not in Python. However, the authors later published Probabilistic Noise2Void, whose (Python) GitHub implementation also includes a Python version of N2V.

This version of the code requires some fixes to their original code, and is therefore provided in this repository. Our checkpoint trained on the FMD data and the training notebooks are also included in the local pn2v directory.

  1. Run the GetData notebook to download the FMD dataset, extract the images, and preprocess them for N2V (see the command sketch below).
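
A hedged sketch for running that step from the command line; the notebook's exact path inside the bundled pn2v code is an assumption, so adjust it as needed:

    # execute the data download / preprocessing notebook in place (path assumed)
    jupyter nbconvert --to notebook --execute --inplace pn2v/GetData.ipynb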

DDPM (faces)

We use the DDPM checkpoint from Label-Efficient Semantic Segmentation with Diffusion Models, trained on the entire FFHQ dataset, and test on CelebA (as is usually done for the faces domain with diffusion models).

The relevant version of guided_diffusion is already included in this repo, and therefore:

  1. Follow their download_checkpoint.sh to download ffhq.pt, and place it in DDPM_FFHQ.
  2. If needed, follow their download_datasets.sh to download CelebA images (a combined sketch follows these steps).
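
A hedged sketch of the resulting layout; download_checkpoint.sh and download_datasets.sh live in the upstream repository, and their exact invocation is documented there:

    # fetch ffhq.pt with the upstream script, then place it where this repo expects it
    bash download_checkpoint.sh
    mkdir -p DDPM_FFHQ
    mv ffhq.pt DDPM_FFHQ/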

Usage Example

python main.py -e <number of eigenvectors> -p <context size around the patch> -t <subspace iters> -c <small constant> -o <output folder> -d <denoiser model> -i <input image path>

Use --help for more information on the parameters and other options, such as low_acc for quickly finding only the eigenvectors (without calculating the moments of the marginal distribution), or use_poly to try to fit a polynomial for the moments calculation.

Use -v to calculate the higher-order moments and estimate the density along the PCs.
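
For illustration, a hedged example invocation; the numeric values and paths are placeholders rather than recommended settings, and the denoiser name must be one of the models supported by main.py:

    python main.py -e 3 -p 7 -t 50 -c 1e-6 -o ./results -d <denoiser model> -i ./inputs/example.png -v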

More Examples

(More face animations: FacesAnimations.mp4)

If you use this code for your research, please cite our paper:

@inproceedings{
    manor2024posterior,
    title={On the Posterior Distribution in Denoising: Application to Uncertainty Quantification},
    author={Hila Manor and Tomer Michaeli},
    booktitle={The Twelfth International Conference on Learning Representations},
    year={2024},
    url={https://openreview.net/forum?id=adSGeugiuj}
}

About

We derive a fundamental property of the posterior distribution in Gaussian denoising, and use it to propose a new way for uncertainty visualization, which requires no training or fine-tuning.
