
Using a 2D U-Net for KiTS21 #151

Open
dannyhow12 opened this issue May 10, 2022 · 0 comments
dannyhow12 commented May 10, 2022

Hi and good day @muellerdo ,

I have managed to train a 3D U-Net on the KiTS21 dataset and would now like to try a 2D U-Net.

Below is my source code, followed by my questions.

import tensorflow as tf
import os
from miscnn.data_loading.interfaces import NIFTIslicer_interface
from miscnn.data_loading.data_io import Data_IO
from miscnn.processing.data_augmentation import Data_Augmentation
from miscnn.processing.subfunctions.normalization import Normalization
from miscnn.processing.subfunctions.clipping import Clipping
from miscnn.processing.subfunctions.resampling import Resampling
from miscnn.processing.preprocessor import Preprocessor
from miscnn.neural_network.model import Neural_Network
from miscnn.neural_network.architecture.unet.standard import Architecture
from miscnn.neural_network.metrics import dice_soft, dice_crossentropy, tversky_loss
#Callbacks
from tensorflow.keras.callbacks import ReduceLROnPlateau
from tensorflow.keras.callbacks import EarlyStopping
from tensorflow.keras.callbacks import ModelCheckpoint
os.environ["CUDA_VISIBLE_DEVICES"] = "0"

#Initialize the NIfTI I/O interface and configure the images as one channel (grayscale) and three segmentation classes (background, kidney, tumor)
# def __init__(self, channels=1, classes=2, three_dim=True, pattern=None):
interface = NIFTIslicer_interface(pattern="case_00[0-9]*", channels=1, classes=3)

data_path = "/content/drive/MyDrive/data_testing/"


#Create the Data I/O object 
#def __init__(self, interface, input_path, output_path="predictions", batch_path="batches", delete_batchDir=True):
#delete_batchDir: if False, only the batches with the associated seed are deleted
data_io = Data_IO(interface, input_path = data_path)

sample_list = data_io.get_indiceslist()
sample_list.sort()
listing = os.listdir(data_path)
no_files = len(listing)
print("Number of files:", no_files)
print("Sample list:", sample_list)

#Create a pixel value normalization Subfunction through Z-Score 
sf_normalize = Normalization(mode='z-score')
#Create a clipping Subfunction between -79 and 304
sf_clipping = Clipping(min=-79, max=304)
#Create a resampling Subfunction to voxel spacing 3.22 x 1.62 x 1.62
#sf_resample = Resampling((3.22, 1.62, 1.62))

#Assemble Subfunction classes into a list
#Be aware that the Subfunctions will be executed according to the list order!
subfunctions = [sf_clipping, sf_normalize]


#Create and configure the Preprocessor class
#prepare_subfunctions=True precomputes the Subfunctions on the dataset and backs the results up to disk
#for 2D, no patches required
pp = Preprocessor(data_io, data_aug = None, batch_size=1, subfunctions=subfunctions, prepare_subfunctions=True, 
                  prepare_batches=False, analysis="fullimage", use_multiprocessing=False)

#Create the Neural Network model
#def __init__(self, n_filters=32, depth=4, activation='softmax',batch_normalization=True):
unet_standard = Architecture(depth=4, activation="softmax", batch_normalization=True)

model = Neural_Network(preprocessor=pp, architecture=unet_standard, loss=tversky_loss, metrics=[dice_soft, dice_crossentropy], learning_rate=0.0001) #metrics: soft Dice and Dice cross-entropy

#Define Callbacks
cb_lr = ReduceLROnPlateau(monitor='loss', factor=0.1, patience=20, verbose=1, mode='min', min_delta=0.0001, cooldown=1, min_lr=0.00001)
cb_es = EarlyStopping(monitor='loss', min_delta=0, patience=150, verbose=1, mode='min')
cb_cp = ModelCheckpoint("/content/drive/MyDrive/2DUnet.{epoch:03d}.hdf5", monitor='val_loss', verbose=1, save_freq='epoch')

#Note: this is a slice-level split; the last 30% of indices go to validation so the two sets do not overlap
training_samples = sample_list[:int(len(sample_list)*0.7)]
validation_samples = sample_list[int(len(sample_list)*0.7):]
print(training_samples, validation_samples)

history = model.evaluate(training_samples,validation_samples, epochs=10, callbacks=[cb_lr, cb_es,cb_cp])

With this, I am able to slice each 3D CT scan of the kidney into its individual slices. However, when loading the images for training, how do I ensure that all slices of the 10 samples are included in the training?
I have referenced issue #25, which uses NIFTIslicer_interface, but I am unsure how to load the dataset into training_samples and validation_samples.
As noted in issue #34, each slice corresponds to a new sample (e.g. case_00000 of KiTS21 has 611 slices, hence 611 images).
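If I understand the slicer interface correctly, `data_io.get_indiceslist()` already returns one index per slice, so passing the full list covers every slice. As a sanity check, the slices can be counted per case by extracting the case id from each index string. This is only a sketch: I am assuming each slice index still contains the original `case_XXXXX` id (the `demo` list below is hypothetical), so the regex may need adjusting to the actual index naming:

```python
import re
from collections import Counter

def slices_per_case(sample_list):
    """Count how many slice indices belong to each case.

    Assumes each slice index string contains the original case id
    (e.g. 'case_00000' somewhere in the index) -- adjust the regex
    if the NIFTIslicer_interface indices are named differently.
    """
    counts = Counter()
    for idx in sample_list:
        match = re.search(r"case_\d+", idx)
        if match:
            counts[match.group(0)] += 1
    return counts

# Hypothetical example: two cases with 3 and 2 slices
demo = ["case_00000:0", "case_00000:1", "case_00000:2",
        "case_00001:0", "case_00001:1"]
print(slices_per_case(demo))  # Counter({'case_00000': 3, 'case_00001': 2})
```

The per-case totals should add up to the full slice count (3164 in my setup), which confirms nothing is dropped.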

For example, I am currently using 10 samples, each sliced into its constituent slices, for a total of 3164 slices/images.

However, if I split by percentage, a slice from patient A could end up in the validation sample list, as mentioned in issue #34. Is there any workaround for this at the moment?
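One possible workaround is to split at the case level before flattening to slices: collect the unique case ids, split those 70/30, and only then gather each case's slice indices. A minimal sketch under the same assumption as above (the `case_XXXXX` id is recoverable from each slice index; `patient_wise_split` and the `demo` data are hypothetical names, not MIScnn API):

```python
import re
import random
from collections import defaultdict

def patient_wise_split(sample_list, train_frac=0.7, seed=42):
    """Split slice indices so that no patient appears in both sets.

    Assumes each slice index contains its case id (e.g. 'case_00000');
    adjust the regex to match the actual NIFTIslicer_interface naming.
    """
    by_case = defaultdict(list)
    for idx in sample_list:
        case = re.search(r"case_\d+", idx).group(0)
        by_case[case].append(idx)

    cases = sorted(by_case)
    random.Random(seed).shuffle(cases)   # reproducible case-level shuffle
    cut = int(len(cases) * train_frac)
    train_cases, val_cases = cases[:cut], cases[cut:]

    training = [idx for c in train_cases for idx in by_case[c]]
    validation = [idx for c in val_cases for idx in by_case[c]]
    return training, validation

# Hypothetical demo: 4 cases with varying slice counts
demo = [f"case_{c:05d}:{s}" for c in range(4) for s in range(c + 2)]
training_samples, validation_samples = patient_wise_split(demo)
```

The resulting lists could then be passed to `model.evaluate(...)` exactly as in the script above, with every slice landing in exactly one of the two sets.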

Regards,
Danny
