
Which standard configuration to use when comparing MIScnn to other methods? #136

Open
davidkvcs opened this issue Mar 22, 2022 · 2 comments
Labels: question

davidkvcs commented Mar 22, 2022

Hi

I have MIScnn shortlisted as a candidate to be included in our work to find the best openly available method for auto-segmenting head and neck cancer (HNC) tumors on PET-CT. Our study includes 1100 HNC patients; we train on about 850 of these, and the rest are held out for testing.

  1. Can MIScnn be configured to handle multimodal input?
  2. Your setup allows for a lot of different configurations. Do you have a paper outlining the configurations I should use for our problem?
     I have seen "MIScnn: a framework for medical image segmentation with convolutional neural networks and deep learning". There you choose normalization, resampling, patch size, etc., but I am unsure whether you would use those same configurations for a problem like ours, or when comparing to other methods. (If I may ask: how did you arrive at these specific configuration choices?)

If it is described somewhere how MIScnn should be configured for a problem like ours, it would most likely be eligible for our study.

Thanks in advance.

muellerdo (Member) commented Mar 23, 2022

Hey @davidkvcs,

> I have MIScnn shortlisted as a candidate to be included in our work to find the best openly available method for auto-segmenting head and neck cancer (HNC) tumors on PET-CT.

Happy to hear that MIScnn can be useful for your studies.

While you can probably obtain competitive results by running MIScnn on your dataset with default parameters, I have to note that MIScnn is a framework/toolbox for building such pipelines, not an AutoML / auto-segmentation software like the excellent nnU-Net from the DKFZ.

> Can MIScnn be configured to handle multimodal input?

Yes. This can be achieved either by implementing a custom IO interface or by combining the different modalities into a single NIfTI file for each sample and using the NIfTI IO interface provided by MIScnn.

You can find the BraTS2020 example (which combines multiple MRI sequences) for multimodal datasets here:
https://github.com/frankkramer-lab/MIScnn/blob/master/examples/BraTS2020.multimodal.ipynb
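
For the second option, a minimal sketch could look like the following (the file names and the HU window are placeholders, and it assumes the PET volume has already been registered and resampled to the CT grid):

```python
import nibabel as nib
import numpy as np

# Load the CT and PET volumes of one sample (placeholder file names)
ct_img = nib.load("sample01_ct.nii.gz")
pet_img = nib.load("sample01_pet.nii.gz")
ct = ct_img.get_fdata()
pet = pet_img.get_fdata()  # assumed to be on the same voxel grid as the CT

# Clip the CT to the desired HU range beforehand (see the note further down),
# so that no later clipping subfunction has to touch the PET channel as well
ct = np.clip(ct, -200, 250)  # example HU window only, adjust to your anatomy

# Stack the modalities as channels on the last axis -> shape (x, y, z, 2)
combined = np.stack([ct, pet], axis=-1)

# Save with the CT affine; the NIfTI IO interface can then load this file
# with channels=2
nib.save(nib.Nifti1Image(combined, ct_img.affine), "sample01_combined.nii.gz")
```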

> Your setup allows for a lot of different configurations. Do you have a paper outlining the configurations I should use for our problem?

Sadly, no. Optimal configurations can vary widely depending on your specific dataset and disease type.

However, I can recommend some "good enough" starting configurations for CT analysis.
For that, I would refer to our COVID-19 example: https://github.com/frankkramer-lab/covid19.MIScnn/blob/master/scripts/run_miscnn.py

Summary:

  • Full / extensive image augmentation
  • Z-score normalization
  • Resampling to a voxel spacing of (1.58, 1.58, 2.70), assuming your slice axis is the last axis
  • Patchwise-crop analysis with a 160×160×80 patch shape
  • A standard U-Net with tversky_crossentropy as the loss function
  • For training, I would highly recommend a cross-validation bagging strategy with at least 200 iterations per epoch (which should be fine with your dataset size)
  • As callbacks: EarlyStopping and a dynamic learning rate
  • Normally, I would also recommend exploiting the Hounsfield units of CT data with a clipping subfunction. However, if you are using PET+CT, you have to clip the CT data to your desired HU range beforehand, because otherwise the clipping would be applied to both modalities in MIScnn (see the sketch after this list)
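
Put together, a rough sketch of such a pipeline could look like the following (based on the COVID-19 script linked above; the data path, channel/class counts, and epoch numbers are placeholders, and the parameter names follow the MIScnn API at the time of writing):

```python
from miscnn import Data_IO, Preprocessor, Data_Augmentation, Neural_Network
from miscnn.data_loading.interfaces import NIFTI_interface
from miscnn.processing.subfunctions import Normalization, Resampling
from miscnn.neural_network.architecture.unet.standard import Architecture
from miscnn.neural_network.metrics import tversky_crossentropy, dice_soft
from tensorflow.keras.callbacks import EarlyStopping, ReduceLROnPlateau

# IO: combined PET+CT NIfTI files -> 2 channels; tumor vs. background -> 2 classes
interface = NIFTI_interface(channels=2, classes=2)
data_io = Data_IO(interface, "data/")  # placeholder data path

# Full / extensive image augmentation
data_aug = Data_Augmentation(cycles=1, scaling=True, rotations=True,
                             elastic_deform=True, mirror=True,
                             brightness=True, contrast=True,
                             gamma=True, gaussian_noise=True)

# Resampling to (1.58, 1.58, 2.70) voxel spacing and z-score normalization
# (the CT channel is assumed to be clipped to its HU range beforehand)
subfunctions = [Resampling((1.58, 1.58, 2.70)),
                Normalization(mode="z-score")]

# Patchwise-crop analysis with a 160x160x80 patch shape
pp = Preprocessor(data_io, data_aug=data_aug, batch_size=2,
                  subfunctions=subfunctions, prepare_subfunctions=True,
                  analysis="patchwise-crop", patch_shape=(160, 160, 80))

# Standard 3D U-Net with Tversky cross-entropy as the loss function
model = Neural_Network(preprocessor=pp, architecture=Architecture(),
                       loss=tversky_crossentropy, metrics=[dice_soft])

# Callbacks: dynamic learning rate & early stopping
cb_lr = ReduceLROnPlateau(monitor="loss", factor=0.1, patience=15)
cb_es = EarlyStopping(monitor="loss", patience=40)

# Simple training run with 200 iterations per epoch; for the recommended
# cross-validation bagging, see miscnn.evaluation.cross_validation
samples = data_io.get_indiceslist()
model.train(samples, epochs=500, iterations=200, callbacks=[cb_lr, cb_es])
```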

Hope that I was able to help you, and good luck with your further study.

Cheers,
Dominik

muellerdo self-assigned this Mar 23, 2022
muellerdo added the question label Mar 23, 2022
joaomamede commented:

On question 1 above:
The example has patch_shape=(80, 160, 160).
Does this mean the model takes each "channel" independently, i.e., that it doesn't model across the 4 channels simultaneously?

I tried to add something like patch_shape=(2, 80, 160, 160), but the U-Net doesn't support 4D patches.

Am I reading the situation wrong?

Thanks so much for MIScnn; it's great. We were able to segment organs from CT/PET scans!
