
3D Multi Modal Brain Structure Segmentation using Adversarial Learning

This work was accepted at the 14th WiML Workshop at NeurIPS 2019. The poster is available here.

Requirements

  • The code is written in Python using TensorFlow.
  • Install all the libraries listed in requirement.txt with the following command:
pip install -r requirement.txt

Dataset

The annotated dataset was provided by MR Brains 2018 for the Grand Challenge on MR Brain Segmentation at MICCAI 2018. It consists of 7 sets of annotated brain MR images (T1, T1 inversion recovery, and T2-FLAIR) with manual segmentations made by experts in brain segmentation. Images were acquired on a 3T scanner at UMC Utrecht (the Netherlands).

The unannotated dataset is provided by the WMH Segmentation Challenge. It consists of brain MR images (T1 and T2-FLAIR) without labels, so only these two modalities were used for training.
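Since only T1 and T2-FLAIR are common to both datasets, each training sample can be assembled by stacking the two co-registered modalities as channels. A minimal NumPy sketch of that idea (array names and shapes here are illustrative assumptions, not the repo's actual data loaders):

```python
import numpy as np

def stack_modalities(t1, flair):
    """Stack two co-registered MR volumes into a channel-last (D, H, W, 2) array."""
    if t1.shape != flair.shape:
        raise ValueError("Modalities must be co-registered to the same shape")
    return np.stack([t1, flair], axis=-1)

# Illustrative random volumes; real scans would be loaded from ".nii.gz" files.
t1 = np.random.rand(48, 240, 240).astype(np.float32)
flair = np.random.rand(48, 240, 240).astype(np.float32)
volume = stack_modalities(t1, flair)
print(volume.shape)  # (48, 240, 240, 2)
```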

How to use the code?

  • Download the dataset and place it in the data folder.
  • Normalize the data by running:
$ python normalize_data.py
  • The preprocessed images will be stored in the mrbrains_normalized folder.
  • With this code you can run both the standard 3D U-Net (baseline) and the 3D GAN.
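Conceptually, per-volume intensity normalization of an MR scan can be sketched as below. A z-score over nonzero (brain) voxels is a common choice and is assumed here; normalize_data.py's exact method may differ:

```python
import numpy as np

def zscore_normalize(volume):
    """Z-score normalize a volume over its nonzero (foreground) voxels."""
    out = volume.astype(np.float32).copy()
    mask = out > 0
    mean, std = out[mask].mean(), out[mask].std()
    out[mask] = (out[mask] - mean) / (std + 1e-8)
    return out

# Illustrative random scan standing in for a loaded MR volume.
scan = np.random.rand(48, 240, 240).astype(np.float32) * 100
normalized = zscore_normalize(scan)
```

After normalization, the foreground voxels have approximately zero mean and unit variance, which stabilizes training across scans acquired with different intensity ranges.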

3D U-Net

The architecture of the 3D U-Net used is shown in the figure below.
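For orientation, a 3D U-Net halves each spatial dimension at every encoder level and doubles it back in the decoder, so input patch dimensions must be divisible by 2^depth. A quick sketch of that shape bookkeeping (the depth and patch size here are illustrative, not the repo's configured values):

```python
def unet_encoder_shapes(input_shape, depth=3):
    """Return the spatial shape at each encoder level of a 3D U-Net."""
    shapes = [tuple(input_shape)]
    for _ in range(depth):
        if any(d % 2 for d in shapes[-1]):
            raise ValueError(f"Shape {shapes[-1]} is not divisible by 2")
        shapes.append(tuple(d // 2 for d in shapes[-1]))  # 2x2x2 max-pool
    return shapes

print(unet_encoder_shapes((32, 32, 32)))
# [(32, 32, 32), (16, 16, 16), (8, 8, 8), (4, 4, 4)]
```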

How to run 3D U-Net?

$ cd multi_modal_gan
  • Configure the flags according to your experiment.
  • To run training
$ python train_3dunet.py --training
  • This will train the model and save the best checkpoint according to your validation performance.
  • You can then run testing to predict the segmented output, which will be saved in your result folder as ".nii.gz" files.
  • To run testing
$ python train_3dunet.py --testing
  • This code computes the Dice coefficient to evaluate testing performance. Once the output segmented images are created, you can use them to compute other evaluation metrics such as Hausdorff Distance and Volumetric Similarity.
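The Dice coefficient used for evaluation measures the overlap between a predicted mask and the ground truth. A minimal NumPy version for binary masks (per-class Dice for a multi-label segmentation would loop this over labels; the repo's own implementation may differ in details):

```python
import numpy as np

def dice_coefficient(pred, truth, eps=1e-8):
    """Dice = 2|A ∩ B| / (|A| + |B|) for binary masks; 1.0 means perfect overlap."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    return 2.0 * intersection / (pred.sum() + truth.sum() + eps)

a = np.array([[1, 1, 0], [0, 1, 0]])
b = np.array([[1, 0, 0], [0, 1, 1]])
print(dice_coefficient(a, b))  # 2*2 / (3+3) ≈ 0.667
```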

3D GAN

The architecture of the 3D GAN used is shown in the figure below. Parts of the code are adapted from here.
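The default model uses a feature-matching objective: instead of training the generator to fool the discriminator directly, it is trained to match the batch-mean intermediate discriminator features of real and generated data. Stripped of the networks themselves, the loss reduces to the sketch below (the feature arrays are illustrative stand-ins for discriminator activations):

```python
import numpy as np

def feature_matching_loss(real_features, fake_features):
    """Squared L2 distance between batch-mean discriminator features
    of real vs. generated samples. Shape: (batch, feature_dim)."""
    diff = real_features.mean(axis=0) - fake_features.mean(axis=0)
    return float(np.sum(diff ** 2))

real = np.ones((4, 8))        # batch of 4 "real" feature vectors
fake = np.full((4, 8), 0.5)   # batch of 4 "generated" feature vectors
print(feature_matching_loss(real, fake))  # 8 * 0.25 = 2.0
```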

How to run 3D GAN?

$ cd multi_modal_gan
  • Configure the flags according to your experiment.
  • To run training
$ python train_3dgan.py --training
  • By default this trains the feature-matching GAN model. To train the bad-GAN-based model instead
$ python train_3dgan.py --training --badGAN
  • To run testing
$ python train_3dgan.py --testing
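The --training/--testing/--badGAN switches above follow a common boolean-flag pattern. A hedged sketch of how such a CLI might be parsed with stdlib argparse (the repo's actual flag handling may differ, e.g. it may use TensorFlow's flags module):

```python
import argparse

def build_parser():
    """Illustrative CLI mirroring the --training/--testing/--badGAN switches."""
    parser = argparse.ArgumentParser(description="3D GAN brain segmentation")
    parser.add_argument("--training", action="store_true", help="run training")
    parser.add_argument("--testing", action="store_true", help="run testing")
    parser.add_argument("--badGAN", action="store_true",
                        help="train the bad-GAN variant instead of feature matching")
    return parser

args = build_parser().parse_args(["--training", "--badGAN"])
print(args.training, args.testing, args.badGAN)  # True False True
```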

Results

Loss Curves

The training curves are shown in the figure below.

Dice Score Comparison over epochs on Validation Set

Visual comparison of segmentations by the 3D U-Net vs. the 3D GAN

(Left to right: 3D U-Net, 3D GAN, Ground Truth)

Contact

You can mail me at: shivangi.tum@gmail.com

[1] Few-shot 3D Multi-modal Medical Image Segmentation using Generative Adversarial Learning
