
Adversarial Attack on 3D U-Net model: Brain Tumour Segmentation.

Overview:

In this project, I test the robustness of a 3D U-Net model for medical image segmentation. U-Net has been very successful for medical image segmentation; here, we look at its robustness for brain tumour segmentation. The model was trained on the BraTS dataset, which contains brain MRI volumes of size (240, 240, 155). For more information, visit here.

Visualization:

The colors used in the GIFs correspond to the following:

  • Red: edema
  • Green: non-enhancing tumour
  • Blue: enhancing tumour

Adversarial Attacks:

Adversarial attacks were performed on patches of size (160, 160, 16) to reduce computation time. In all of the methods below, the parameters were as follows:

  • Iterations: 10
  • Epsilon: 0.2
  • Alpha: 0.02
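
The results below are reported as Dice coefficients between the predicted and ground-truth segmentations. As a point of reference, here is a minimal NumPy sketch of the metric on binary masks; the repository's own implementation in util.py may differ.

```python
import numpy as np

def dice_coefficient(pred, target, eps=1e-7):
    """Dice coefficient between a predicted mask and a ground-truth mask.

    pred and target are arrays of the same shape containing values in {0, 1}
    (or soft probabilities in [0, 1]).
    """
    pred = pred.astype(np.float64).ravel()
    target = target.astype(np.float64).ravel()
    intersection = np.sum(pred * target)
    return (2.0 * intersection + eps) / (np.sum(pred) + np.sum(target) + eps)
```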

Iterative Fast Gradient Sign Method (iFGSM):

  • Dice Coefficient of prediction before attack: 0.7913155
  • Dice Coefficient of prediction after attack: 0.43321887
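
For reference, the core iFGSM update looks roughly like the sketch below. It assumes a PyTorch-style model and segmentation loss (the names model, loss_fn, x, and y are placeholders, not the repository's code) and uses the epsilon, alpha, and iteration count listed above.

```python
import torch

def ifgsm_attack(model, x, y, loss_fn, epsilon=0.2, alpha=0.02, iterations=10):
    """Untargeted iterative FGSM on an input patch.

    Repeatedly steps in the direction of the sign of the loss gradient and
    projects the perturbation back into an L-infinity ball of radius epsilon.
    Intensity clamping is omitted, assuming the MRI patches are normalized
    rather than bounded to [0, 1].
    """
    x_adv = x.clone().detach()
    for _ in range(iterations):
        x_adv.requires_grad_(True)
        loss = loss_fn(model(x_adv), y)            # segmentation loss on true labels
        grad = torch.autograd.grad(loss, x_adv)[0]
        with torch.no_grad():
            x_adv = x_adv + alpha * grad.sign()                    # ascend the loss
            x_adv = x + torch.clamp(x_adv - x, -epsilon, epsilon)  # project onto the eps-ball
    return x_adv.detach()
```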

Ground Truth:

GIF of Ground Truth for iFGSM

Prediction before attack:

GIF of Prediction before Attack for iFGSM

Prediction after attack:

GIF of Prediction after Attack for iFGSM

Targeted Iterative Fast Gradient Sign Method (tiFGSM):

  • Dice Coefficient of prediction before attack: 0.8463957
  • Dice Coefficient of prediction after attack: 0.53347206
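
The targeted variant differs only in the sign of the update: instead of increasing the loss on the true labels, it decreases the loss towards a chosen target segmentation. Below is a sketch under the same assumptions as the iFGSM example, with y_target as a hypothetical target mask.

```python
import torch

def targeted_ifgsm_attack(model, x, y_target, loss_fn,
                          epsilon=0.2, alpha=0.02, iterations=10):
    """Targeted iterative FGSM: pushes the prediction towards y_target."""
    x_adv = x.clone().detach()
    for _ in range(iterations):
        x_adv.requires_grad_(True)
        loss = loss_fn(model(x_adv), y_target)     # loss w.r.t. the chosen target mask
        grad = torch.autograd.grad(loss, x_adv)[0]
        with torch.no_grad():
            x_adv = x_adv - alpha * grad.sign()                    # descend towards the target
            x_adv = x + torch.clamp(x_adv - x, -epsilon, epsilon)  # stay within the eps-ball
    return x_adv.detach()
```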

Ground Truth:

GIF of Ground Truth for tiFGSM

Prediction before attack:

GIF of Prediction before Attack for tiFGSM

Prediction after attack:

GIF of Prediction after Attack for tiFGSM

Carlini and Wagner Attack (CW):

  • Dice Coefficient of prediction before attack: 0.840131
  • Dice Coefficient of prediction after attack: 0.45113495
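
As a rough illustration only, the sketch below shows a heavily simplified L2 attack in the spirit of Carlini & Wagner: the perturbation is optimized directly with Adam to trade off its L2 norm against the segmentation loss. The original attack uses a tanh change of variables and a margin loss on the logits; the constant c and learning rate here are illustrative, not taken from the repository.

```python
import torch

def cw_style_attack(model, x, y, loss_fn, c=1.0, lr=0.01, iterations=10):
    """Heavily simplified Carlini & Wagner style L2 attack.

    Optimizes the perturbation delta directly with Adam, trading off a small
    L2 norm of delta against a large segmentation loss on the true labels.
    """
    delta = torch.zeros_like(x, requires_grad=True)
    optimizer = torch.optim.Adam([delta], lr=lr)
    for _ in range(iterations):
        optimizer.zero_grad()
        adv_loss = loss_fn(model(x + delta), y)
        l2_penalty = torch.sum(delta ** 2)
        # Minimize the perturbation size while maximizing the segmentation
        # loss, hence the minus sign in front of adv_loss.
        objective = l2_penalty - c * adv_loss
        objective.backward()
        optimizer.step()
    return (x + delta).detach()
```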

Ground Truth:

GIF of Ground Truth for CW

Prediction before attack:

GIF of Prediction before Attack for CW

Prediction after attack:

GIF of Prediction after Attack for CW

References:

The first half of the code, dealing with building the model in Notebook.ipynb and util.py, has been borrowed from here.

For more information about the algorithms and the model used in this project, see the papers below:

  • U-Net: Convolutional Networks for Biomedical Image Segmentation (link).
  • Adversarial examples in the physical world (link).
  • Towards Evaluating the Robustness of Neural Networks (link).