PARIETAL

PARIETAL: Yet another deeP leARnIng brain ExTrAtion tooL đź‘Żđź‘Ż

Motivation: why another skull-stripping method?

Over the last few years, we have happily used the available state-of-the-art skull-stripping tools. However, in deep learning pipelines most of the internal processes (lesion segmentation, tissue segmentation, etc.) are computed very fast thanks to GPUs, so brain extraction tends to be orders of magnitude slower than the rest of the GPU-based pipeline. The main motivation behind PARIETAL is to have a fast and robust skull-stripping method that can be incorporated into our deep learning pipelines.

Fortunately, various brain MRI datasets have been released, such as the Calgary-Campinas-359 dataset, permitting researchers to train deep learning models that will hopefully improve both performance and processing time. Although different deep learning methods have already been proposed for accurate brain extraction, PARIETAL is yet another one, yielding fast and accurate outputs regardless of the image acquisition protocol. To validate the proposed method, we have carried out different experiments using the model trained on the Campinas dataset, analyzing the capability of the learned architecture on unseen data and varied image acquisition protocols.

Architecture:

PARIETAL is a patch-based residual 3D U-Net with ~10M parameters (see Figure below). We trained the model using the silver masks provided by the Calgary-Campinas-359 dataset. This dataset consists of 359 images of healthy adults (29-80 years) acquired on Siemens, Philips and General Electric scanners at both 1.5T and 3T (see Souza et al. 2017 for more information about the dataset).

[Figure: media/unet_architecture.png]
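As a rough, hypothetical illustration of the building block named above (a residual block inside a 3D U-Net), a PyTorch sketch could look as follows. Channel counts, normalization and activation choices here are assumptions, not PARIETAL's actual code:

import torch
import torch.nn as nn

class ResidualBlock3D(nn.Module):
    """Two 3x3x3 convolutions plus an identity skip connection."""
    def __init__(self, channels):
        super().__init__()
        self.conv1 = nn.Conv3d(channels, channels, kernel_size=3, padding=1)
        self.conv2 = nn.Conv3d(channels, channels, kernel_size=3, padding=1)
        self.bn1 = nn.BatchNorm3d(channels)
        self.bn2 = nn.BatchNorm3d(channels)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        out = self.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        return self.relu(out + x)  # add the input back (residual connection)

Stacking such blocks along the encoder and decoder paths of a 3D U-Net, with down/up-sampling in between, gives a network in the ~10M-parameter range described above.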

Training/inference characteristics:

  • Input modalities: T1-w
  • Training patch size: 32x32x32
  • Training sampling: balanced, i.e. the same number of brain and non-brain (skull) samples, after sampling at 16x16x16
  • Optimizer: Adadelta
  • Training batch size: 32
  • Training epochs: 200
  • Training loss: cross-entropy
  • Early stopping: 50 epochs (based on validation DSC)
  • Inference patch size: 32x32x32
  • Inference sampling: 16x16x16 (overlapping patches; see the sketch below)
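Since the sampling step (16 voxels) is half the patch size (32 voxels), neighboring patches overlap and each voxel is predicted several times. The following NumPy sketch shows what such overlapping patch extraction could look like; it is a simplified, hypothetical helper, not PARIETAL's actual implementation:

import numpy as np

def extract_patches(volume, patch=32, step=16):
    """Slide a patch-sized window over a 3D volume with the given step.

    For simplicity, borders that do not align with the step are not padded.
    """
    starts = [range(0, max(dim - patch, 0) + 1, step) for dim in volume.shape]
    patches = []
    for x in starts[0]:
        for y in starts[1]:
            for z in starts[2]:
                patches.append(volume[x:x + patch, y:y + patch, z:z + patch])
    return np.stack(patches)

At inference, the per-patch probabilities of the overlapping predictions can then be averaged before thresholding to produce the final brain mask.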

Installation:

We implemented PARIETAL in Python using the PyTorch deep learning toolkit. All necessary packages can be installed via pip as follows:

pip install -r requirements.txt

How to use it:

As a standalone script:

To use PARIETAL as a standalone script, just run ./parietal --help to see all the available options:

/path/to/parietal --help

Mandatory parameters:

  • input_scan (--input_image): T1-w nifti image to process
  • output_scan (--out_name): Output name for the skull-stripped image

Optional parameters:

  • binary threshold (--threshold): output threshold used to binarize the skull-stripped image (default=0.5)
  • gpu use (--gpu): use GPU for faster inference (default=No)
  • gpu number (--gpu_number): which GPU number to use (default=0)
  • verbose (--verbose): show useful information
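For example, a typical invocation could look like this (the file names are placeholders, and --gpu is assumed to behave as a boolean switch):

/path/to/parietal --input_image T1.nii.gz --out_name T1_brainmask.nii.gz --gpu --verbose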

As a Python library:

To use PARIETAL as a Python library, just import the BrainExtraction class from your script.

from brain_extraction import BrainExtraction

b = BrainExtraction()

input_scan = 'tests/example/T1.nii.gz'
output_scan = 'tests/example/parietal_brainmask.nii.gz'

# The result is stored both on disk (at the output_scan path) and
# returned as a np.array
brainmask = b.run(input_scan, output_scan)
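Since the mask is also returned as a np.array, it can be combined directly with the input image. The snippet below uses nibabel and assumes the returned array is a binary mask in the same space as the input T1; the output file name is a placeholder:

import nibabel as nib

# Zero out non-brain voxels to obtain the skull-stripped brain
t1 = nib.load(input_scan)
stripped = t1.get_fdata() * brainmask
nib.save(nib.Nifti1Image(stripped, t1.affine), 'tests/example/T1_stripped.nii.gz')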

In order to facilitate its use in larger experiments, the method's options can be set by default from a configuration file stored at config/config.cfg. Class declaration arguments override the default configuration:

[data]
normalize = True
out_threshold = 0.5
workers = 10

[model]
model_name = campinas_baseline_s2_multires
sampling_step = 16
patch_shape = 32
use_gpu = True
gpu_number = 0
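For instance, to force CPU inference for a single run without editing the file, the corresponding options could be passed at construction time (a minimal sketch, assuming the constructor accepts keyword arguments matching the config option names):

b = BrainExtraction(use_gpu=False, out_threshold=0.7)  # hypothetical keyword overrides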

GPU vs CPU use:

The model can run on either a GPU or a decent CPU. In most of our experiments, PARIETAL extracts the brain from a T1-w image in less than 10 seconds when using a GPU and in about 2 minutes when running on the CPU (see the performance experiments below for a more complete analysis).

Docker version:

In order to reduce the hassle of installing all the dependencies on your local machine, we also provide a Docker version. Please follow the guide to install Docker for your operating system. If you are on Linux and want to use the GPU capabilities of your local machine, make sure you install the nvidia-docker (version 2.0) packages.

Once Docker is available on your system, install the minimal Python dependencies:

pip install pyfiglet docker

Then, running PARIETAL is as easy as running the standalone script (note: the first time you run the script it may take a while, because the Docker image has to be downloaded to your system):

/path/to/parietal_docker --help

Mandatory parameters:

  • input_scan (--input_image): T1-w nifti image to process
  • output_scan (--out_name): Output name for the skull-stripped image

Optional parameters:

  • binary threshold (--threshold): output threshold used to binarize the skull-stripped image (default=0.5)
  • gpu use (--gpu): use GPU for faster inference (default=No)
  • gpu number (--gpu_number): which GPU number to use (default=0)
  • verbose (--verbose): show useful information
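For example (file names are placeholders):

/path/to/parietal_docker --input_image T1.nii.gz --out_name T1_brainmask.nii.gz --gpu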

Performance:

We have compared the performance of PARIETAL with several publicly available state-of-the-art tools as well as other deep learning methods. To do so, we have run PARIETAL on different publicly available datasets such as OASIS, LPBA40 and the Campinas dataset.
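All the tables below report Dice, sensitivity and specificity (in %) between a predicted and a reference binary mask. For reference, here is a minimal NumPy sketch of these standard overlap metrics (not PARIETAL's actual evaluation code):

import numpy as np

def overlap_metrics(pred, ref):
    """Dice, sensitivity and specificity (in %) for two binary masks."""
    pred, ref = pred.astype(bool), ref.astype(bool)
    tp = np.sum(pred & ref)      # true positives
    tn = np.sum(~pred & ~ref)    # true negatives
    fp = np.sum(pred & ~ref)     # false positives
    fn = np.sum(~pred & ref)     # false negatives
    dice = 200.0 * tp / (2 * tp + fp + fn)
    sensitivity = 100.0 * tp / (tp + fn)
    specificity = 100.0 * tn / (tn + fp)
    return dice, sensitivity, specificity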

Campinas dataset:

Performance evaluation against the 12 manual masks from the Campinas dataset. Values for the other methods are extracted from the Lucena et al. 2019 paper:

method              Dice    Sensitivity   Specificity
ANTs                95.93   94.51         99.70
BEAST               95.77   93.84         99.76
BET                 95.22   98.26         99.13
BSE                 90.48   91.44         98.64
HWA                 91.66   99.93         97.83
MBWSS               95.57   92.78         99.48
OPTIBET             95.43   96.13         99.37
ROBEX               95.61   98.42         99.13
STAPLE (previous)   96.80   98.98         99.38
Silver-masks        97.13   96.82         99.70
CONSNet             97.18   98.91         99.46
PARIETAL            97.23   96.73         97.75

LPBA40 dataset:

Performance evaluation against the 40 manual masks from the LPBA40 dataset. Values for the other methods are extracted from the Lucena et al. 2019 paper:

method                                 Dice    Sensitivity   Specificity
ANTs                                   97.25   98.98         99.17
BEAST                                  96.30   94.06         99.76
BET                                    96.62   97.23         99.27
HWA                                    92.51   99.89         97.02
MBWSS                                  96.24   94.40         99.68
OPTIBET                                95.87   93.35         99.74
ROBEX                                  96.77   96.50         99.50
STAPLE (previous)                      97.59   98.14         99.46
CONSNet (Campinas model)               97.35   98.14         99.45
CONSNet (trained on LPBA40)            98.47   98.55         99.75
auto UNET Salehi (trained on LPBA40)   97.73   98.31         99.48
Unet Salehi (trained on LPBA40)        96.79   97.22         99.34
3DCNN Kleesiek (trained on LPBA40)     96.96   97.46         99.41
PARIETAL (Campinas model)              97.25   96.10         98.40

OASIS dataset:

Similar to the previous datasets, we also show the performance of PARIETAL against the 77 brain masks of the OASIS dataset. Values for the other methods are extracted from the Lucena et al. 2019 paper:

method                                Dice    Sensitivity   Specificity
ANTs                                  95.30   94.39         98.73
BEAST                                 92.46   86.76         99.70
BET                                   93.50   92.63         98.10
HWA                                   93.95   98.36         96.12
MBWSS                                 90.24   84.09         99.35
OPTIBET                               94.45   91.51         99.22
ROBEX                                 95.55   93.95         99.06
STAPLE (previous)                     96.09   95.18         98.98
CONSNet (Campinas model)              95.54   93.98         99.05
CONSNet (trained on OASIS)            97.14   97.45         98.88
auto UNET Salehi (trained on OASIS)   97.62   98.66         98.77
Unet Salehi (trained on OASIS)        96.22   97.29         98.27
3DCNN Kleesiek (trained on OASIS)     95.02   92.40         99.28
PARIETAL (Campinas model)             92.55   87.40         98.51

In contrast to the previous datasets, the OASIS masks were not manually annotated, so the results of PARIETAL using the Campinas-trained model were limited, mostly due to inconsistencies between labelling protocols 🤷‍♂️ (see Figure below):

[Figure: media/oasis_masks.png]

To further illustrate this issue, we retrained the model on the 77 brain masks of the OASIS dataset, following the same two-fold cross-validation strategy used in Kleesiek et al. 2016, Salehi et al. 2017 and Lucena et al. 2019. After retraining, the performance of PARIETAL was similar to or better than that of the other deep learning methods:

method                                Dice    Sensitivity   Specificity
CONSNet (Campinas model)              95.54   93.98         99.05
CONSNet (trained on OASIS)            97.14   97.45         98.88
auto UNET Salehi (trained on OASIS)   97.62   98.66         98.77
Unet Salehi (trained on OASIS)        96.22   97.29         98.27
3DCNN Kleesiek (trained on OASIS)     95.02   92.40         99.28
PARIETAL (Campinas model)             92.55   87.40         98.51
PARIETAL (trained on OASIS)           97.99   97.84         98.14

Processing time:

Finally, we analyze the processing time (in seconds) of the proposed architecture against other methods in the field. For PARIETAL, we report the processing times both with and without loading the model into the GPU for each new sample; loading the model each time is necessary when the model is not run in batch mode (batch mode is still to be implemented).

Processing times for all methods but PARIETAL are extracted from the Lucena et al. 2019 paper, where the authors report using a workstation equipped with an Intel Xeon E3-1220 v3 CPU (4 x 3.10 GHz). GPU resources are identical for all the deep learning methods (NVIDIA TITAN-X GPU, 12GB).

method                        Campinas   OASIS   LPBA40
ANTs                          1378       1025    1135
BEAST                         1128       944     905
BET                           9          5       7
BSE                           2          1       1
HWA                           846        248     281
MBWSS                         135        66      79
OPTIBET                       773        579     679
ROBEX                         60         53      57
CONSNet (GPU)                 25         18      36
CONSNet (CPU)                 516        214     301
PARIETAL (GPU)                12         7       9
PARIETAL (GPU + model load)   17         12      14
PARIETAL (CPU)                129        122     141

References:

  1. Souza, R., Lucena, O., Garrafa, J., Gobbi, D., Saluzzi, M., Appenzeller, S., … Lotufo, R. (2017). An open, multi-vendor, multi-field-strength brain MR dataset and analysis of publicly available skull stripping methods agreement. NeuroImage, 170, 482–494. (link)
  2. Lucena, O., Souza, R., Rittner, L., Frayne, R., & Lotufo, R. (2019). Convolutional neural networks for skull-stripping in brain MR imaging using silver standard masks. Artificial Intelligence in Medicine, 98, 48–58. (link)
  3. Salehi, S. S. M., Erdogmus, D., & Gholipour, A. (2017). Auto-Context Convolutional Neural Network (Auto-Net) for Brain Extraction in Magnetic Resonance Imaging. IEEE Transactions on Medical Imaging, 36(11), 2319–2330. (link)
  4. Kleesiek, J., Urban, G., Hubert, A., Schwarz, D., Maier-Hein, K., Bendszus, M., & Biller, A. (2016). Deep MRI brain extraction: A 3D convolutional neural network for skull stripping. NeuroImage, 129, 460–469. (link)

Versions:

  • v0.1: first usable version
  • v0.2: multi-resolution training
  • v0.3: Docker capabilities and paper cleanup