
Segmentation-aware Image Denoising Without Knowing True Segmentation

Implementation of the paper:

Segmentation-aware Image Denoising Without Knowing True Segmentation

Sicheng Wang, Bihan Wen, Junru Wu, Dacheng Tao, Zhangyang Wang

Overview

We propose a segmentation-aware image denoising model dubbed U-SAID, which does not need any ground-truth segmentation map in training and can therefore be applied directly to any image dataset. We demonstrate that images denoised by U-SAID have:

  • better visual quality;
  • stronger robustness for subsequent semantic segmentation tasks.

We also demonstrate U-SAID's superior generalizability in three respects:

  • denoising unseen types of images;
  • pre-processing unseen noisy images for segmentation;
  • pre-processing unseen images for unseen high-level tasks.

Methods

U-SAID: Network architecture. The USA module is composed of a feature embedding sub-network for transforming the denoised image to a feature space, followed by an unsupervised segmentation sub-network that projects the feature to a segmentation map and calculates its pixel-wise uncertainty.
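The code below is a minimal PyTorch sketch of such a module, not the authors' implementation: the class name USAModule, the layer sizes, the number of segmentation classes, and the use of softmax entropy as the pixel-wise uncertainty are assumptions made for illustration.

import torch
import torch.nn as nn
import torch.nn.functional as F

class USAModule(nn.Module):
    # Illustrative USA-style module: feature embedding + unsupervised segmentation.
    def __init__(self, in_channels=3, feat_channels=32, num_classes=21):
        super().__init__()
        # Feature embedding sub-network: maps the denoised image into a feature space.
        self.embed = nn.Sequential(
            nn.Conv2d(in_channels, feat_channels, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(feat_channels, feat_channels, 3, padding=1), nn.ReLU(inplace=True),
        )
        # Unsupervised segmentation sub-network: projects features to a soft segmentation map.
        self.segment = nn.Conv2d(feat_channels, num_classes, 1)

    def forward(self, denoised):
        feat = self.embed(denoised)
        probs = F.softmax(self.segment(feat), dim=1)              # (B, classes, H, W)
        # Pixel-wise uncertainty: entropy of the soft segmentation at each pixel.
        entropy = -(probs * torch.log(probs + 1e-8)).sum(dim=1)   # (B, H, W)
        return entropy.mean()                                     # scalar uncertainty term

During training, this uncertainty term would be added to a standard reconstruction loss (e.g. MSE against the clean image), pushing the denoiser toward outputs that segment confidently without ever requiring a ground-truth segmentation map.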

Visual Examples

Visual comparison on Kodak Images

Semantic segmentation from Pascal VOC 2012 validation set

How to run

Dependencies

Train

USAID_train.py
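Training can be launched from the repository root with the script above; any command-line flags or configuration options should be checked against the script itself:

python USAID_train.py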

Saved Models

Saved_Models/USAID.pth
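A minimal sketch of loading this checkpoint, assuming a PyTorch environment; the model class here is a placeholder, so substitute whatever network definition USAID_train.py builds:

import torch

# The .pth file may store either a full model or a state_dict; adjust accordingly.
checkpoint = torch.load('Saved_Models/USAID.pth', map_location='cpu')
# model = DenoisingNet()               # placeholder: use the actual class from this repo
# model.load_state_dict(checkpoint)
# model.eval()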

Citation

If you use this code for your research, please cite our paper.

@misc{1905.08965,
  Author = {Sicheng Wang and Bihan Wen and Junru Wu and Dacheng Tao and Zhangyang Wang},
  Title = {Segmentation-Aware Image Denoising without Knowing True Segmentation},
  Year = {2019},
  Eprint = {arXiv:1905.08965},
}
