
Color Invariant Skin Segmentation

This is the implementation of the paper Color Invariant Skin Segmentation, based on FCN [1] and U-Net [2].

Color Space Augmentation

Images are augmented in the HSV color space: we change the HSV values of the training images to enlarge the training set, using the parameters shown in the image below.

(Figure: color space augmentation parameters)

Here is the pipeline using color space augmentation for skin segmentation.

(Figure: skin segmentation pipeline with color space augmentation)

Here is one example of how to change the HSV values of an image.

Color space augmentation can help skin segmentation models deal with complex illumination conditions. Below are some examples: label A means the model was trained with color space augmentation, while label B means it was trained without it.

(Figure: example results)

How to use

Requirements

  • Python 3.8.5
  • PyTorch
  • TensorFlow
  • Keras

Test

  1. Clone the repo:
     git clone https://github.com/HanXuMartin/Color-Invariant-Skin-Segmentation.git
  2. Change into the repository directory:
     cd Color-Invariant-Skin-Segmentation
  3. Download the models from here
  4. For U-Net: change the model path, testing path, and output path in test.py, then run it:
     cd U-Net
     python test.py
  5. For FCN: change the model path, testing path, and output path in prediction.py, then run it:
     cd FCN
     python prediction.py

Train models with your own dataset

We trained our models on the ECU [3] dataset. Follow the steps below if you want to train your own models. We suggest keeping the ground truth in an image format (e.g., jpg, png).

Dataset organization

Organize your dataset as follows:

dataset
|-----train
        |-----image
        |-----mask
|-----validation
        |-----image
        |-----mask
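As a quick sanity check before training (this script is not part of the repo), the following sketch lists images in the layout above that lack a matching mask. It assumes each mask shares its image's base filename, possibly with a different extension:

```python
from pathlib import Path

def find_unpaired_images(root):
    """Return image files under <root>/{train,validation}/image that have
    no same-named file (extension ignored) in the sibling mask folder."""
    unpaired = []
    for split in ("train", "validation"):
        img_dir = Path(root) / split / "image"
        mask_dir = Path(root) / split / "mask"
        mask_stems = {p.stem for p in mask_dir.iterdir()} if mask_dir.is_dir() else set()
        images = sorted(img_dir.iterdir()) if img_dir.is_dir() else []
        unpaired += [img for img in images if img.stem not in mask_stems]
    return unpaired

# Example: check a dataset root before training
print(find_unpaired_images("dataset"))  # empty list if every image has a mask
```

Running this once before training catches missing or misnamed masks early, instead of failing partway through an epoch.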

U-Net training

  1. Open U-Net/train.py and change the parameters in the "Training setup" section.
  2. Run the train.py file:
     python train.py

Checkpoints will be saved in U-Net/checkpoints by default.

FCN training

  1. Open data_train.py and data_val.py and change the training/validation paths. Remember to change the format of the mask names in the line
     imgB = cv2.imread('your mask')
  2. Open FCN.py, change the training parameters and the saving path of checkpoints, then run it:
     python FCN.py
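As an illustration of the mask-name change in step 1, one hypothetical naming scheme (masks stored as PNGs in the sibling mask folder, same base name as the image) could be handled like this; the helper and paths are assumptions for illustration, not the repo's actual code:

```python
import os

def mask_path_for(image_path):
    """Hypothetical mapping: dataset/.../image/foo.jpg -> dataset/.../mask/foo.png.
    Adjust the folder replacement and extension to match your own mask naming."""
    stem, _ = os.path.splitext(image_path)
    return stem.replace("/image/", "/mask/") + ".png"

print(mask_path_for("dataset/train/image/foo.jpg"))
# dataset/train/mask/foo.png
```

The resulting path would then be what you pass to cv2.imread in data_train.py / data_val.py.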

Checkpoints will be saved in FCN/checkpoints by default.

Reference

[1]. https://github.com/yunlongdong/FCN-pytorch

[2]. https://github.com/zhixuhao/unet

[3]. S. L. Phung, A. Bouzerdoum and D. Chai, "Skin segmentation using color pixel classification: analysis and comparison," in IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 27, no. 1, pp. 148-154, Jan. 2005, doi: 10.1109/TPAMI.2005.17.

[4]. M. Wang, W. Deng, J. Hu, X. Tao and Y. Huang, "Racial Faces in the Wild: Reducing Racial Bias by Information Maximization Adaptation Network," 2019 IEEE/CVF International Conference on Computer Vision (ICCV), 2019, pp. 692-702, doi: 10.1109/ICCV.2019.00078.

Cite this repo

If you find this repo useful, please consider citing it as follows:

@InProceedings{Xu_2022_CVPR,
    author    = {Xu, Han and Sarkar, Abhijit and Abbott, A. Lynn},
    title     = {Color Invariant Skin Segmentation},
    booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops},
    month     = {June},
    year      = {2022},
    pages     = {2906-2915}
}
