
Melanoma Recognition via Visual Attention


Updates:

  • Jan 2020: The code has been upgraded to support PyTorch >= 1.1.0. If you are using an older version, be sure to read the following warnings.

WARNING If you are using PyTorch < 1.1.0, pay attention to the following two points.

  • Prior to PyTorch 1.1.0, the learning rate scheduler was expected to be called before the optimizer's update; 1.1.0 changed this behavior in a BC-breaking way. If you call the scheduler (scheduler.step()) before the optimizer's update (optimizer.step()), the first value of the learning rate schedule is skipped. If you cannot reproduce results after upgrading to PyTorch 1.1.0, check whether you are calling scheduler.step() at the wrong time.
  • Prior to 1.1.0, TensorBoard was not natively supported in PyTorch. An alternative is tensorboardX. The good news is that the APIs are the same (in fact, torch.utils.tensorboard has the same API as tensorboardX).
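
For reference, here is a minimal sketch of the call order expected by PyTorch >= 1.1.0 (a generic toy loop, not the repo's actual training code): call optimizer.step() for every batch, then scheduler.step() once per epoch.

```python
# Minimal sketch, not the repo's training loop: illustrates the ordering expected
# by PyTorch >= 1.1.0 (optimizer.step() before scheduler.step()).
import torch

model = torch.nn.Linear(10, 2)
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=10, gamma=0.1)

for epoch in range(30):
    for _ in range(5):  # dummy batches
        inputs = torch.randn(4, 10)
        targets = torch.randint(0, 2, (4,))
        optimizer.zero_grad()
        loss = torch.nn.functional.cross_entropy(model(inputs), targets)
        loss.backward()
        optimizer.step()   # update the weights first
    scheduler.step()       # then advance the LR schedule (once per epoch)
```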

If you use the code for your own research, please cite the following paper :)

@inproceedings{yan2019melanoma,
  title={Melanoma Recognition via Visual Attention},
  author={Yan, Yiqi and Kawahara, Jeremy and Hamarneh, Ghassan},
  booktitle={International Conference on Information Processing in Medical Imaging},
  pages={793--804},
  year={2019},
  organization={Springer}
}

Project webpage

[Figure: network]

[Figure: visualization]

Pre-trained models

Google drive link

How to run

1. Dependencies

  • PyTorch >= 1.1.0
  • torchvision
  • scikit-learn
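
A quick way to check that the environment satisfies these requirements (a small standalone sketch, not part of the repo):

```python
# Small sketch (not part of the repo): check that the required packages import
# and print their versions; torch should be >= 1.1.0.
import torch
import torchvision
import sklearn

print("torch:", torch.__version__)
print("torchvision:", torchvision.__version__)
print("scikit-learn:", sklearn.__version__)
```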

2. Data preparation

ISIC 2016: download here; organize the data as follows:

  • data_2016/
    • Train/
      • benign/
      • malignant/
    • Test/
      • benign/
      • malignant/
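
The class-per-subfolder layout above follows torchvision's ImageFolder convention, so it can be loaded directly. The sketch below is only an illustration (hypothetical paths and transform values, not the repo's own data pipeline):

```python
# Hypothetical loading sketch, not the repo's data pipeline: the benign/ and
# malignant/ subfolders match torchvision's ImageFolder convention.
import torch
from torchvision import datasets, transforms

transform = transforms.Compose([
    transforms.Resize((224, 224)),  # illustrative size
    transforms.ToTensor(),
])

train_set = datasets.ImageFolder("data_2016/Train", transform=transform)
test_set = datasets.ImageFolder("data_2016/Test", transform=transform)

train_loader = torch.utils.data.DataLoader(train_set, batch_size=32, shuffle=True)
print(train_set.classes)  # ['benign', 'malignant']
```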

ISIC 2017: download here; organize the data as follows:

  • data_2017/
    • Train/
      • melanoma/
      • nevus/
      • seborrheic_keratosis/
    • Val/
      • melanoma/
      • nevus/
      • seborrheic_keratosis/
    • Test/
      • melanoma/
      • nevus/
      • seborrheic_keratosis/
    • Train_Lesion/
      • melanoma/
      • nevus/
      • seborrheic_keratosis/
    • Train_Dermo/
      • melanoma/
      • nevus/
      • seborrheic_keratosis/

The folder Train_Lesion contains the lesion segmentation maps (ISIC 2017 Part I); the folder Train_Dermo contains the dermoscopic feature maps (ISIC 2017 Part II). The raw dermoscopic-feature data requires some preprocessing to convert it into binary maps; what is stored under Train_Dermo is the union map of the four dermoscopic features. Note that not all images have dermoscopic features (i.e., some of the maps are all zero).
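
For the union map, a possible preprocessing sketch is given below. It assumes the four per-feature annotations have already been exported as same-sized binary mask images; the file names and layout are hypothetical, and this is not the repo's actual preprocessing script.

```python
# Sketch only: merge four per-feature binary masks into one union map.
# Assumes same-sized grayscale mask images; file names below are hypothetical.
import numpy as np
from PIL import Image

def union_map(mask_paths):
    """Return the pixel-wise union of several binary feature masks."""
    union = None
    for path in mask_paths:
        mask = np.array(Image.open(path).convert("L")) > 0  # binarize
        union = mask if union is None else (union | mask)
    if union is None:
        raise ValueError("no masks given")
    return union.astype(np.uint8) * 255

# Example usage (hypothetical file names for the four dermoscopic features):
# masks = ["ISIC_0000000_pigment_network.png", "ISIC_0000000_negative_network.png",
#          "ISIC_0000000_streaks.png", "ISIC_0000000_milia_like_cyst.png"]
# Image.fromarray(union_map(masks)).save("data_2017/Train_Dermo/melanoma/ISIC_0000000.png")
```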

3. Training

  1. Training without any attention map regularization (with only the classification loss, i.e., AttnMel-CNN in the paper):
  • train on ISIC 2016
python train.py --dataset ISIC2016 --preprocess --over_sample --focal_loss --log_images
  • train on ISIC 2017 (by default)
python train.py --dataset ISIC2017 --preprocess --over_sample --focal_loss --log_images
  2. Training with attention map regularization (AttnMel-CNN-Lesion or AttnMel-CNN-Dermo in the paper):

We only train on ISIC 2017 for these two models.

  • AttnMel-CNN-Lesion (by default)
python train_seg.py --seg lesion --preprocess --over_sample --focal_loss --log_images
  • AttnMel-CNN-Dermo
python train_seg.py --seg dermo --preprocess --over_sample --focal_loss --log_images
  3. Testing
python test.py --dataset ISIC2016

or

python test.py --dataset ISIC2017  
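
The --focal_loss flag used in the training commands above enables a focal loss instead of plain cross-entropy. For context, here is a generic focal loss sketch (Lin et al., 2017); it illustrates the idea and is not necessarily this repo's exact implementation:

```python
# Generic focal loss sketch (not necessarily this repo's exact implementation).
import torch
import torch.nn.functional as F

def focal_loss(logits, targets, gamma=2.0):
    # logits: (N, C) raw class scores; targets: (N,) integer class labels
    log_probs = F.log_softmax(logits, dim=1)
    log_pt = log_probs.gather(1, targets.unsqueeze(1)).squeeze(1)  # log p_t
    pt = log_pt.exp()
    return (-(1.0 - pt) ** gamma * log_pt).mean()  # down-weights easy examples

# Example with random data:
logits = torch.randn(8, 2)
targets = torch.randint(0, 2, (8,))
print(focal_loss(logits, targets))
```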

LICENSE

Creative Commons License
This work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.
