FgSty + CPL

This repository contains the code for our foreground-aware stylization (FgSty) and consensus pseudo-labeling (CPL), together with the synthesized dataset used in our experiments, ObMan-Ego (see DATASET.md). If you have any requests or questions, please contact the first author.

Paper

Foreground-Aware Stylization and Consensus Pseudo-Labeling for Domain Adaptation of First-Person Hand Segmentation
Takehiko Ohkawa, Takuma Yagi, Atsushi Hashimoto, Yoshitaka Ushiku, and Yoichi Sato
IEEE Access, 2021
Project page: https://tkhkaeio.github.io/projects/21_FgSty-CPL/

Requirements

Python 3.7
PyTorch 1.6.0

The data directory structure should be:

- root / source-dataset (e.g., EGTEA, Ego2Hands, ObMan-Ego)
    - train
    - trainannot (segmentation mask)
    - test
    - testannot (segmentation mask)
- root / target-datasets (e.g., GTEA, EDSH12, EDSH1K, UTG, YHG)
    - train
    - trainannot (segmentation mask)
    - test
    - testannot (segmentation mask)
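
As a rough illustration, this layout for one source and one target dataset could be created as follows (the root path and dataset choice are only examples):

ROOT=/path/to/your/data-root            # hypothetical data root
for d in ObMan-Ego GTEA; do             # one source and one target dataset
    mkdir -p "$ROOT/$d"/{train,trainannot,test,testannot}
done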

Stylization

  1. Please download the pretrained PhotoWCT models from [here] and place them in FgSty/pretrained_models.

For stylizing a single content/style pair,

  1. cd FgSty and run
python test.py --model /path/to/your/model \
               --content_image_path /path/to/your/content-image \
               --content_seg_path /path/to/your/content-mask \
               --style_image_path /path/to/your/style-image \
               --style_seg_path /path/to/your/style-mask \
               --output_image_path /path/to/your/output

(If you stylize the foreground only, see FgSty-Only.md.)
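
As an example, a single call might look like the following (all file names and paths are illustrative, not files shipped with the repository):

python test.py --model pretrained_models/photo_wct.pth \
               --content_image_path /path/to/data-root/ObMan-Ego/train/0001.jpg \
               --content_seg_path /path/to/data-root/ObMan-Ego/trainannot/0001.png \
               --style_image_path /path/to/data-root/GTEA/train/0001.jpg \
               --style_seg_path /path/to/data-root/GTEA/trainannot/0001.png \
               --output_image_path /path/to/output/0001_stylized.png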

For stylizing in batches,

  1. cd FgSty and specify your data root directory in make_script_rand.py.

  2. Run python make_script_rand.py to create files with arguments for stylization.

  3. Run the generated scripts, e.g., ./scripts/EGTEA_v1_test_part00x.sh (see the batch example below).
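
If make_script_rand.py produces several part scripts, they can be run in sequence with a small loop (the glob below assumes the naming shown in step 3):

for f in ./scripts/EGTEA_v1_test_part*.sh; do
    bash "$f"
done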

Training & Adaptation

  1. Please download the pretrained RefineNet models from [here] and place them in CPL/pretrained_models.

  2. cd CPL and run

python train_refinenet.py --dataset /path/to/your/dataset

for the naive training on a single dataset, or run

python train_refinenet_CPL.py --dataset /path/to/your/style-adapted-dataset \
                              --src_dataset /path/to/your/source-dataset \
                              --trg_dataset /path/to/your/target-dataset \
                              --src_model_path /path/to/your/pretrained-source-model \
                              --eta 0

for adaptation training based on the consensus scheme, without adversarial adaptation.
Note: CPL training requires two GPUs.
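
Putting the two steps together, a hypothetical end-to-end run could first pretrain on the stylized source data and then start the CPL adaptation on two GPUs (dataset paths, checkpoint names, and GPU indices are illustrative):

# 1) naive training on the style-adapted source dataset
python train_refinenet.py --dataset /path/to/data-root/ObMan-Ego-stylized

# 2) CPL adaptation using the resulting source model on two GPUs
CUDA_VISIBLE_DEVICES=0,1 python train_refinenet_CPL.py \
    --dataset /path/to/data-root/ObMan-Ego-stylized \
    --src_dataset /path/to/data-root/ObMan-Ego \
    --trg_dataset /path/to/data-root/GTEA \
    --src_model_path /path/to/checkpoints/source_model.pth \
    --eta 0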

Evaluation

  1. cd CPL and specify your data root directory in test_refinenet.py.

  2. Run

python test_refinenet.py --dataset /path/to/your/target-dataset \
                         --model_path /path/to/your/test-model 
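
For example, evaluating an adapted model on one target dataset could look like this (paths are illustrative):

python test_refinenet.py --dataset /path/to/data-root/GTEA \
                         --model_path /path/to/checkpoints/cpl_model.pth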

References

FastPhotoStyle: https://github.com/NVIDIA/FastPhotoStyle
RefineNet: https://github.com/DrSleep/refinenet-pytorch
UMA: https://github.com/cai-mj/UMA
