License: CC BY-NC-SA 4.0 | Python 3.6

DeFLOCNet: Deep Image Editing via Flexible Low-level Controls (CVPR 2021). The official PyTorch code.

Hongyu Liu, Ziyu Wan, Wei Huang, Yibing Song, Xintong Han, Jing Liao, Bin Jiang, Wei Liu.

DeFLOCNet editing demo (GIF)

Installation

Clone this repo.

git clone https://github.com/KumapowerLIU/DeFLOCNet.git

Prerequisites

  • Python 3
  • PyTorch >= 1.0
  • Tensorboard
  • Torchvision
  • Pillow
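
The prerequisites above can be installed in one step; a minimal setup sketch (the repo only pins Python 3 and PyTorch >= 1.0, so the unversioned package names below are an assumption):

```shell
# Install the listed prerequisites with pip (exact versions unpinned;
# any PyTorch >= 1.0 build should work per the list above).
pip install torch torchvision tensorboard pillow
```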

Demo

Please try our GUI demo!

Download the pre-trained models into the checkpoints folder: put the Places2 model in checkpoints/nature and the CelebA model in checkpoints/face. Then run demo.py to edit images. Example images are provided in the face_sample and nature_sample folders. Please see the GIF above for how to use our GUI!

python demo.py

Dataset Preparation

Original images: We use the Places2 and CelebA datasets. To train a model on the full data, download the datasets from their official websites.

Mask for original image: We use the irregular mask dataset of Liu et al. for the original image (not the color image). You can download the publicly available Irregular Mask Dataset from their website.
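
These irregular masks are binary images marking the region to be edited. A minimal NumPy sketch of how such a mask might be applied to an original image (the convention that 1 marks the hole, and the function name, are assumptions for illustration, not the repo's code):

```python
import numpy as np

def apply_irregular_mask(image, mask):
    """Zero out the masked (hole) region of an image.

    image: H x W x 3 float array in [0, 1]
    mask:  H x W binary array; 1 marks the hole to be edited (assumption)
    """
    # Broadcast the 2-D mask over the 3 color channels.
    return image * (1.0 - mask)[..., None]

# Toy example: a 4x4 white image with a 2x2 hole in the middle.
img = np.ones((4, 4, 3))
mask = np.zeros((4, 4))
mask[1:3, 1:3] = 1
masked = apply_irregular_mask(img, mask)
```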

Color images for Places2: We use the RTV smoothing method to extract the color images for Places2. Run the generation function data/matlab/generate_structure_images.m in MATLAB. For example, to generate smooth images for Places2, run:

generate_structure_images("path to Places2 dataset root", "path to output folder");

Color images for face: We follow SC-FEGAN and generate the color map for the face using the median color of each segmented area.

Sketch images: We follow SC-FEGAN and predict edges with the HED edge detector.
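
The median-color idea above can be sketched in a few lines of NumPy: fill each segmented region with the median color of its pixels. The label-map format and the function name are assumptions for illustration, not SC-FEGAN's actual code:

```python
import numpy as np

def median_color_map(image, labels):
    """Replace each segmented region with its median color.

    image:  H x W x 3 float array
    labels: H x W integer array of segment ids (assumed format)
    """
    out = np.zeros_like(image)
    for seg_id in np.unique(labels):
        region = labels == seg_id          # boolean mask for this segment
        # image[region] has shape (n_pixels, 3); take per-channel median.
        out[region] = np.median(image[region], axis=0)
    return out

# Toy example: two segments of two pixels each (red channel only).
img = np.array([[[10.0, 0, 0], [30.0, 0, 0]],
                [[100.0, 0, 0], [200.0, 0, 0]]])
labels = np.array([[0, 0], [1, 1]])
cmap = median_color_map(img, labels)
```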

Code Structure

  • scripts/train.py: the entry point for training.
  • scripts/test.py: the entry point for testing.
  • model/DeFLOCNet.py: defines the loss, model, optimization, forward and backward passes, and more.
  • model/network/structure_generation_block.py: defines the SGB block described in our paper.
  • config/: creates option lists using the argparse package. More options are dynamically added in other files as well.
  • data/: processes the dataset before passing it to the network.
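
The config/ pattern of building argparse option lists that other modules extend might look like the following sketch (the flag names and defaults here are purely illustrative, not the repo's actual options):

```python
import argparse

def base_options():
    """Build the shared option list; other modules can add their own flags
    to the returned parser before parsing (pattern sketch, names assumed)."""
    parser = argparse.ArgumentParser(description="DeFLOCNet options (illustrative)")
    parser.add_argument("--dataroot", type=str, default="./datasets/places2",
                        help="path to the image dataset")
    parser.add_argument("--checkpoints_dir", type=str, default="./checkpoints",
                        help="where models are saved and loaded")
    parser.add_argument("--batch_size", type=int, default=4)
    return parser

# A downstream script parses its own command line; here we pass args directly.
opt = base_options().parse_args(["--batch_size", "8"])
```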

Pre-trained weights

There are two folders containing pre-trained models, one per dataset. To use these pre-trained models, please see the Demo section above.

TODO

  • Upload the training and testing scripts.

Citation

If you use this code for your research, please cite our paper.

@inproceedings{Liu2021DeFLOCNet,
  title={DeFLOCNet: Deep Image Editing via Flexible Low level Controls},
  author={Hongyu Liu and Ziyu Wan and Wei Huang and Yibing Song and Xintong Han and Jing Liao and Bin Jiang and Wei Liu},
  booktitle={CVPR},
  year={2021}
}
