This is not a bug report; I just want to know why you chose to use your own custom class to resize and convert the images, since torchvision.transforms already offers all of this functionality.
One guess is that the torchvision functions have a GPU memory leak?
I run into out-of-memory errors when I use the equivalent functionality from torchvision.transforms.
Another question is about the DataLoader you use for the slice-level or patch-level experiments. With shuffle=True for training, the shuffling seems to happen at the subject level, not across all available slices; all slices of each sampled subject are then extracted sequentially, so the training data is not really shuffled. Say we extract 100 slices from each MRI and the batch size is 16: the shuffle randomly gives us 16 subjects, then the first slice of each of those 16 subjects is trained on, then the second, and so on until the last. This produces the same order of labels for 100 consecutive iterations.
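To illustrate the fix I have in mind: if the dataset exposes one item per slice (so `__len__` returns subjects × slices and a flat index maps back to a (subject, slice) pair), then the DataLoader's index shuffle mixes slices from different subjects instead of walking each subject sequentially. A minimal torch-free sketch of that index mapping, with illustrative names and sizes that are not from this repository:

```python
import random

def flat_to_subject_slice(idx, slices_per_subject):
    """Map a flat dataset index back to a (subject, slice) pair."""
    return divmod(idx, slices_per_subject)

# Hypothetical sizes: 4 subjects, 100 slices each.
n_subjects, slices_per_subject = 4, 100
n_items = n_subjects * slices_per_subject

# Slice-level shuffling: permute all (subject, slice) pairs at once,
# the way a DataLoader with shuffle=True permutes flat indices.
rng = random.Random(0)
order = list(range(n_items))
rng.shuffle(order)

# The first "batch" of 16 now draws from mixed subjects and mixed
# slice positions, rather than slice 0 of 16 fixed subjects.
first_batch = [flat_to_subject_slice(i, slices_per_subject) for i in order[:16]]
print(first_batch)
```

With subject-level shuffling, by contrast, the slice position within each batch would be identical across the batch, which is exactly the repeated label ordering described above.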
Can you help me clarify these two questions?
Thanks in advance