I have MIScnn shortlisted as a candidate for inclusion in our work to find the best openly available method for autosegmenting head and neck cancer (HNC) tumors on PET-CT. Our study includes 1100 HNC patients; we train on about 850 of these, and the rest are reserved for testing.
Can MIScnn be configured to handle multimodal input?
Your setup allows for a lot of different configurations. Do you have a paper outlining configurations I should use for our problem?
--- I have seen "MIScnn: a framework for medical image segmentation with convolutional neural networks and deep learning". There you choose normalization, resampling, patch size, etc., but I am unsure whether you would use those same configurations for a problem like ours, or when comparing against other methods. -- (If I may ask: how did you arrive at these specific configuration choices?)
If it is documented somewhere how we should configure MIScnn for our problem, MIScnn would most likely be eligible for our study.
Thanks in advance --
> I have MIScnn shortlisted as a candidate to be included on our work to find the best openly available method to autosegment head and neck cancer (HNC) tumors on PET-CT.
Happy to hear that MIScnn can be useful for your studies.
While you can probably obtain competitive results by running MIScnn on your dataset with default parameters, I have to note that MIScnn is a framework/toolbox for building such pipelines, not an AutoML / autosegmentation software like the excellent nnU-Net from the DKFZ.
> Can MIScnn be configured to handle multimodal input?
Yes. This can be achieved either by implementing a custom IO Interface or by combining the different modalities into a single NIfTI file for each sample and using the NIfTI IO Interface provided by MIScnn.
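The second option (merging modalities into one file) can be sketched roughly as follows. This is a minimal illustration, not MIScnn code: the random arrays stand in for volumes you would actually load from disk (e.g. via nibabel), the shapes are placeholders, and it assumes the PET has already been resampled onto the CT grid.

```python
import numpy as np

# Hypothetical stand-ins for volumes loaded from disk,
# e.g. via nibabel: ct = nib.load("ct.nii.gz").get_fdata()
ct = np.random.rand(512, 512, 120)    # CT volume (x, y, z)
pet = np.random.rand(512, 512, 120)   # PET volume, already resampled to the CT grid

# Stack modalities on a trailing channel axis -> one 4D array per sample.
# Saved as a single NIfTI (e.g. nib.save(nib.Nifti1Image(merged, affine), ...)),
# this can then be read as a two-channel sample.
merged = np.stack([ct, pet], axis=-1)
print(merged.shape)  # (512, 512, 120, 2)
```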
For your problem, I would recommend:
- Resampling to a voxel spacing of (1.58, 1.58, 2.70) (depending on whether your slice axis is the last axis here)
- Patchwise-crop analysis with a patch shape of (160, 160, 80)
- A standard U-Net with tversky_crossentropy as the loss function
- For training: a cross-validation bagging strategy with at least 200 iterations per epoch (which should be fine with your dataset size)
- As callbacks: EarlyStopping & a dynamic learning rate
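The recommendations above could be wired together roughly like this. This is a configuration sketch, not a tested pipeline: the module paths and argument names follow MIScnn's public examples and should be checked against your installed version, and `"data/"` plus the channel/class counts are placeholders for your dataset.

```python
# Sketch of a MIScnn pipeline following the recommendations above.
# Assumptions: PET+CT merged into two-channel NIfTI samples, binary
# tumor/background segmentation, dataset under the placeholder path "data/".
from miscnn.data_loading.interfaces.nifti_io import NIFTI_interface
from miscnn.data_loading.data_io import Data_IO
from miscnn.processing.subfunctions.normalization import Normalization
from miscnn.processing.subfunctions.resampling import Resampling
from miscnn.processing.preprocessor import Preprocessor
from miscnn.neural_network.model import Neural_Network
from miscnn.neural_network.metrics import tversky_crossentropy, dice_soft
from tensorflow.keras.callbacks import EarlyStopping, ReduceLROnPlateau

interface = NIFTI_interface(channels=2, classes=2)   # PET+CT, tumor vs. background
data_io = Data_IO(interface, "data/")                # placeholder dataset path

# Resample to the recommended voxel spacing, then normalize per sample
sf = [Resampling((1.58, 1.58, 2.70)), Normalization()]
pp = Preprocessor(data_io, batch_size=2, subfunctions=sf,
                  prepare_subfunctions=True,
                  analysis="patchwise-crop", patch_shape=(160, 160, 80))

# Standard U-Net architecture (MIScnn's default) with Tversky crossentropy loss
model = Neural_Network(preprocessor=pp,
                       loss=tversky_crossentropy, metrics=[dice_soft])

# EarlyStopping & dynamic learning rate as callbacks; 200 iterations per epoch
cb = [EarlyStopping(monitor="loss", patience=20),
      ReduceLROnPlateau(monitor="loss", factor=0.1, patience=10)]
sample_list = data_io.get_indiceslist()
model.train(sample_list, epochs=500, iterations=200, callbacks=cb)
```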
Normally, I would also recommend clipping the Hounsfield units of CT data with a clipping subfunction. However, since you are using PET+CT, you have to clip the CT data beforehand with your desired HU range (otherwise the clipping would be applied to both modalities in MIScnn).
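Clipping only the CT channel before merging can be sketched like this. The HU window and array shapes are hypothetical placeholders; pick a range suited to your tissue of interest.

```python
import numpy as np

# Hypothetical HU window -- choose a range suited to your tissue of interest.
HU_MIN, HU_MAX = -100, 200

ct = np.random.uniform(-1000, 1500, size=(64, 64, 32))  # stand-in CT volume (HU)
pet = np.random.uniform(0, 20, size=(64, 64, 32))       # stand-in PET volume

# Clip ONLY the CT channel before merging the modalities, so the clipping
# never touches the PET intensities.
ct_clipped = np.clip(ct, HU_MIN, HU_MAX)
merged = np.stack([ct_clipped, pet], axis=-1)
```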
Hope that I was able to help you & good luck on your further study.
On this question (#1)
The example has patch_shape=(80, 160, 160).
Does this mean the model takes each "channel" independently? It doesn't model across the 4 channels simultaneously? I tried something like patch_shape=(2, 80, 160, 160), but the U-Net doesn't support 4D patch shapes. Am I reading the situation wrong?
Thanks so much for MIScnn, it's great! We were able to segment organs from CT/PET scans!