This repository has been archived by the owner on May 22, 2023. It is now read-only.

petteriTeikari/voxelmorph_CT

voxelmorph

This repo targets the "MICCAI diffeomorphic model", i.e. you still have to use TensorFlow (as of June 2020)

For the original repository and its notes, see https://github.com/voxelmorph/voxelmorph

SETUP

  1. Export your voxelmorph path, e.g.

    export PYTHONPATH=$PYTHONPATH:.../voxelmorph/ext/neuron/:.../voxelmorph/ext/pynd-lib/:.../voxelmorph/ext/pytools-lib/

  2. The original repo only ships MRI atlases, so we have a couple of options here. The easiest is to use the template from Johns Hopkins, https://github.com/muschellij2/high_res_ct_template, created from the CQ500 dataset. The original template resolution (JohnHopkins_CQ500_template_0.5mm.nii.gz in this repo, template.nii.gz in the original Hopkins repo) was 0.5 mm^3; it was downsampled with [resample_template.py](resample_template.py) to 1.0 mm^3 (JohnHopkins_CQ500_template_1.0mm.nii.gz), as well as to 2.0 mm^3, which might be your only option on a home/consumer GPU, as the 1.0 mm^3 training will not fit in 8 GB of GPU RAM.

  3. This repo reads volumes from .npz files rather than from .nii.gz directly, so there is a little tool to do the conversion: [convert_niftis_to_npz.py](convert_niftis_to_npz.py). Note that it currently converts only one mask at a time; TODO: update it to include all the masks if you need those as .npz as well.
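
The downsampling in step 2 boils down to block-averaging the template grid by an integer factor (0.5 mm → 1.0 mm is a factor of 2, 0.5 mm → 2.0 mm a factor of 4). A minimal numpy-only sketch of that arithmetic; the actual [resample_template.py](resample_template.py) presumably works on the NIfTI file directly (e.g. via nibabel), so treat this as an illustration only:

```python
import numpy as np

def downsample_iso(vol, factor):
    """Downsample an isotropic volume by an integer factor via block averaging.
    E.g. factor=2 takes a 0.5 mm template grid to 1.0 mm spacing."""
    # Crop each axis down to a multiple of the factor so the blocks tile evenly
    x, y, z = (s - s % factor for s in vol.shape)
    v = vol[:x, :y, :z]
    # Split each axis into (coarse, fine) and average over the fine axes
    v = v.reshape(x // factor, factor, y // factor, factor, z // factor, factor)
    return v.mean(axis=(1, 3, 5))
```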

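The conversion in step 3 essentially dumps each volume into a compressed .npz. A hedged sketch, assuming the volume has already been loaded as a numpy array (e.g. with nibabel from a .nii.gz); the key name "vol_data" is my assumption, so check what the training code's npz loader actually expects:

```python
import numpy as np

def volume_to_npz(vol, npz_path, key="vol_data"):
    """Save one volume to a compressed .npz under the given key.
    The key name "vol_data" is an assumption; match the training loader."""
    np.savez_compressed(npz_path, **{key: np.asarray(vol, dtype=np.float32)})
    return npz_path
```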
TRAINING

  1. Start training with [src/train_CT_sCROMIS2ICH.py](src/train_CT_sCROMIS2ICH.py), e.g. with the following command:

python train_CT_sCROMIS2ICH.py /home/petteri/Dropbox/manuscriptDrafts/CThemorr/DATA_DVC/sCROMIS2ICH/CT/labeled/MNI_2mm_128vx-3D/data/BM4D_brainWeighed_nonNaN_-100_npz --gpu 0

By default the model will continue training from the pre-trained model [models/sCROMIS2ICH_2mm_128vx.h5](models/sCROMIS2ICH_2mm_128vx.h5); you can set that default argument to None if you wish to start from scratch.

Training of that model (at 2 mm^3 resolution, with all 209 volumes in the training split) was terminated at:

loss: 8.6873 - spatial_transformer_1_loss: 0.8157 - concatenate_5_loss: 7.8716

after ~800 epochs at lr=1e-4, ~250 epochs at lr=1e-5, and ~100 epochs at lr=1e-6. It could probably be fine-tuned a bit further with an LR scheduler and some ensembling, though likely not by much.
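
The manual schedule above can be packaged as a plain epoch-to-learning-rate function; a sketch with the epoch boundaries taken from the counts quoted above (usable, for example, with Keras's LearningRateScheduler callback):

```python
def step_lr(epoch):
    """Reproduce the manual schedule used here: ~800 epochs at 1e-4,
    then ~250 at 1e-5, then ~100 at 1e-6."""
    if epoch < 800:
        return 1e-4
    if epoch < 1050:   # 800 + 250
        return 1e-5
    return 1e-6        # everything beyond epoch 1050
```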

INFERENCE

Image pairs (input data vs. template) are registered with register.py, e.g.:

python register.py --gpu 0 ../data/test_vol_01006_2mm.nii.gz ../data/JohnHopkins_CQ500_template_2.0mm_norm.nii.gz --out_img ../data/test_vol_01006_2mm_registered.nii.gz --model_file ../models/sCROMIS2ICH_2mm_128vx.h5 --out_warp ../data/test_vol_01006_2mm_warp_field.nii.gz

or, more cleanly, with the default output files:

python register.py --gpu 0 ../data/test_vol_01006_2mm.nii.gz ../data/JohnHopkins_CQ500_template_2.0mm_norm.nii.gz

There is also a batch_register.py script that registers all files in the input directory against the chosen atlas (which needs to be the same one used for training):

python batch_register.py --gpu 0 /home/petteri/Dropbox/manuscriptDrafts/CThemorr/DATA_DVC/sCROMIS2ICH/CT/labeled/MNI_2mm_128vx-3D/data/BM4D_brainWeighed_nonNaN_-100 ../data/JohnHopkins_CQ500_template_2.0mm_norm.nii.gz

This saves both the registered image and the warp field to disk. The code could probably be made more robust to exceptions.
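
A minimal sketch of what "more robust to exceptions" could look like for the batch loop: one failing volume no longer aborts the whole run, and failures are collected for inspection. Here register_fn is a hypothetical stand-in for whatever register.py's core routine is:

```python
from pathlib import Path

def batch_register(in_dir, register_fn, pattern="*.nii.gz"):
    """Run register_fn on every matching file in in_dir, collecting
    failures instead of letting one bad volume kill the batch."""
    done, failed = [], []
    for f in sorted(Path(in_dir).glob(pattern)):
        try:
            register_fn(f)
            done.append(f.name)
        except Exception as e:
            failed.append((f.name, str(e)))
    return done, failed
```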

TODO!

  • The code is not very sophisticated at the moment: the original probabilistic diffeomorphic implementation was in TensorFlow, and I did not want to spend too much time on TensorFlow, just quickly validate the idea that this works for CT. The original authors have made a PyTorch version of the non-diffeomorphic VoxelMorph, and the diffeomorphic one may pop up soon too, so further TensorFlow work seems like a waste of time? It depends on how eagerly you need this, whether it actually generalizes that well, and whether you would rather put your bets on Mikael Brudfors et al. 2020: "Flexible Bayesian Modelling for Nonlinear Image Registration".

  • You would probably eventually want to combine this end-to-end with restoration and segmentation (e.g. Estienne et al. 2020: "Deep Learning-Based Concurrent Brain Registration and Tumor Segmentation" + GitHub, Keras), so I don't know how much you want to over-optimize this part?

  • Notice that the original authors had played with the hyperparameters (image_sigma=0.01 and prior_lambda=25 for "MRI MICCAI 2018"), and you could try to optimize these for CT (see e.g. PyTorch Lightning's tuning utilities and Optuna).

  • My GPU RAM was not sufficient for the 1 mm³ model, so that will have to wait for a DGX-1. The "Johns Hopkins" template also comes at 0.5 mm³, so that option is out there as well.

  • The vanilla implementation does not seem to do such a good job with the hematoma, which is pretty much gone after `voxelmorph`. In the template (clipped to 0–100 HU) there is no hematoma, which might explain this behavior?
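
To illustrate the hyperparameter search suggested above, here is a toy random search over image_sigma and prior_lambda in plain Python; Optuna's samplers would explore the same space more cleverly, and train_eval is a hypothetical stand-in for "train the model, return validation loss". The search ranges around the MRI defaults (0.01 and 25) are my assumption:

```python
import random

def random_search(train_eval, n_trials=20, seed=0):
    """Toy random search over the two VoxelMorph hyperparameters.
    Returns (best_loss, best_params)."""
    rng = random.Random(seed)
    best = (float("inf"), None)
    for _ in range(n_trials):
        params = {
            "image_sigma": 10 ** rng.uniform(-3, -1),  # log scale around 0.01
            "prior_lambda": rng.uniform(1, 100),       # linear scale around 25
        }
        loss = train_eval(params)
        if loss < best[0]:
            best = (loss, params)
    return best
```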

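For the template clipping mentioned in the last bullet, a small sketch of windowing a CT volume to the same 0–100 HU range; the rescale to [0, 1] is my assumption from typical VoxelMorph-style preprocessing, not something confirmed by this repo:

```python
import numpy as np

def clip_and_norm_hu(vol, lo=0.0, hi=100.0):
    """Clip a CT volume to the template's HU window (0..100 HU here)
    and rescale to [0, 1]."""
    v = np.clip(np.asarray(vol, dtype=np.float32), lo, hi)
    return (v - lo) / (hi - lo)
```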
CT example

VoxelMorph Papers

If you use voxelmorph or some part of the code, please cite (see bibtex):