Task058: RuntimeError: Given transposed=1, weight of size [128, 64, 1, 2, 2], expected input[2, 64, 16, 160, 160] to have 128 channels, but got 64 channels instead #65

Open
WangHuikong opened this issue May 27, 2021 · 8 comments

@WangHuikong

I tried to train on the Electron Microscopy dataset using Task058_ISBI_EM_SEG.py, but I get: RuntimeError: Given transposed=1, weight of size [128, 64, 1, 2, 2], expected input[2, 64, 16, 160, 160] to have 128 channels, but got 64 channels instead

Has the UnetPlusPlus code been updated recently, or do I need to modify something?
Thank you very much!

My commands:
python3.7 Task058_ISBI_EM_SEG.py
nnUNet_plan_and_preprocess -t 058 --verify_dataset_integrity
nnUNet_train 3d_fullres nnUNetPlusPlusTrainerV2 Task058_ISBI_EM_SEG 0 --npz

Log output:
###############################################
I am running the following nnUNet: 3d_fullres
My trainer class is: <class 'nnunet.training.network_training.nnUNetPlusPlusTrainerV2.nnUNetPlusPlusTrainerV2'>
For that I will be using the following configuration:
num_classes: 1
modalities: {0: 'EM'}
use_mask_for_norm OrderedDict([(0, False)])
keep_only_largest_region None
min_region_size_per_class None
min_size_per_class None
normalization_schemes OrderedDict([(0, 'nonCT')])
stages...

stage: 0
{'batch_size': 2, 'num_pool_per_axis': [2, 6, 6], 'patch_size': array([ 16, 320, 320]), 'median_patient_size_in_voxels': array([ 30, 512, 512]), 'current_spacing': array([50., 4., 4.]), 'original_spacing': array([50., 4., 4.]), 'do_dummy_2D_data_aug': True, 'pool_op_kernel_sizes': [[1, 2, 2], [1, 2, 2], [1, 2, 2], [2, 2, 2], [2, 2, 2], [1, 2, 2]], 'conv_kernel_sizes': [[1, 3, 3], [1, 3, 3], [1, 3, 3], [3, 3, 3], [3, 3, 3], [3, 3, 3], [3, 3, 3]]}

I am using stage 0 from these plans
I am using sample dice + CE loss

I am using data from this folder: /UnetPlusPlus/UNetPlusPlus-master/UNetPlusPlus-master/pytorch/dataset/nnUNet_preprocessed/Task058_ISBI_EM_SEG/nnUNetData_plans_v2.1
###############################################
14:21:02.764737: Using dummy2d data augmentation
[[1, 2, 2], [1, 2, 2], [1, 2, 2], [2, 2, 2], [2, 2, 2], [1, 2, 2]]
loading dataset
loading all case properties
unpacking dataset
done
6
6
256
2
<class 'nnunet.network_architecture.generic_UNetPlusPlus.ConvDropoutNormNonlin'>
<class 'torch.nn.modules.conv.ConvTranspose3d'>
6
128
2
<class 'nnunet.network_architecture.generic_UNetPlusPlus.ConvDropoutNormNonlin'>
<class 'torch.nn.modules.conv.ConvTranspose3d'>
6
4
5
weight_decay: 3e-05
14:21:06.468369: lr: 0.01
using pin_memory on device 0
using pin_memory on device 0
14:21:08.702853: Unable to plot network architecture:
14:21:08.703407: No module named 'hiddenlayer'
14:21:08.703701:
printing the network instead:

14:21:08.703849: Generic_UNetPlusPlus(
(loc0): ModuleList(
(0): Sequential(
(0): StackedConvLayers(
(blocks): Sequential(
(0): ConvDropoutNormNonlin(
(conv): Conv3d(640, 320, kernel_size=[3, 3, 3], stride=(1, 1, 1), padding=[1, 1, 1])
(instnorm): InstanceNorm3d(320, eps=1e-05, momentum=0.1, affine=True, track_running_stats=False)
(lrelu): LeakyReLU(negative_slope=0.01, inplace=True)
)
)
)
(1): StackedConvLayers(
(blocks): Sequential(
(0): ConvDropoutNormNonlin(
(conv): Conv3d(320, 320, kernel_size=[3, 3, 3], stride=(1, 1, 1), padding=[1, 1, 1])
(instnorm): InstanceNorm3d(320, eps=1e-05, momentum=0.1, affine=True, track_running_stats=False)
(lrelu): LeakyReLU(negative_slope=0.01, inplace=True)
)
)
)
)
(1): Sequential(
(0): StackedConvLayers(
(blocks): Sequential(
(0): ConvDropoutNormNonlin(
(conv): Conv3d(960, 320, kernel_size=[3, 3, 3], stride=(1, 1, 1), padding=[1, 1, 1])
(instnorm): InstanceNorm3d(320, eps=1e-05, momentum=0.1, affine=True, track_running_stats=False)
(lrelu): LeakyReLU(negative_slope=0.01, inplace=True)
)
)
)
(1): StackedConvLayers(
(blocks): Sequential(
(0): ConvDropoutNormNonlin(
(conv): Conv3d(320, 320, kernel_size=[3, 3, 3], stride=(1, 1, 1), padding=[1, 1, 1])
(instnorm): InstanceNorm3d(320, eps=1e-05, momentum=0.1, affine=True, track_running_stats=False)
(lrelu): LeakyReLU(negative_slope=0.01, inplace=True)
)
)
)
)
(2): Sequential(
(0): StackedConvLayers(
(blocks): Sequential(
(0): ConvDropoutNormNonlin(
(conv): Conv3d(1024, 256, kernel_size=[3, 3, 3], stride=(1, 1, 1), padding=[1, 1, 1])
(instnorm): InstanceNorm3d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=False)
(lrelu): LeakyReLU(negative_slope=0.01, inplace=True)
)
)
)
(1): StackedConvLayers(
(blocks): Sequential(
(0): ConvDropoutNormNonlin(
(conv): Conv3d(256, 256, kernel_size=[3, 3, 3], stride=(1, 1, 1), padding=[1, 1, 1])
(instnorm): InstanceNorm3d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=False)
(lrelu): LeakyReLU(negative_slope=0.01, inplace=True)
)
)
)
)
(3): Sequential(
(0): StackedConvLayers(
(blocks): Sequential(
(0): ConvDropoutNormNonlin(
(conv): Conv3d(640, 128, kernel_size=[3, 3, 3], stride=(1, 1, 1), padding=[1, 1, 1])
(instnorm): InstanceNorm3d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=False)
(lrelu): LeakyReLU(negative_slope=0.01, inplace=True)
)
)
)
(1): StackedConvLayers(
(blocks): Sequential(
(0): ConvDropoutNormNonlin(
(conv): Conv3d(128, 128, kernel_size=[3, 3, 3], stride=(1, 1, 1), padding=[1, 1, 1])
(instnorm): InstanceNorm3d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=False)
(lrelu): LeakyReLU(negative_slope=0.01, inplace=True)
)
)
)
)
(4): Sequential(
(0): StackedConvLayers(
(blocks): Sequential(
(0): ConvDropoutNormNonlin(
(conv): Conv3d(384, 64, kernel_size=[1, 3, 3], stride=(1, 1, 1), padding=[0, 1, 1])
(instnorm): InstanceNorm3d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=False)
(lrelu): LeakyReLU(negative_slope=0.01, inplace=True)
)
)
)
(1): StackedConvLayers(
(blocks): Sequential(
(0): ConvDropoutNormNonlin(
(conv): Conv3d(64, 64, kernel_size=[1, 3, 3], stride=(1, 1, 1), padding=[0, 1, 1])
(instnorm): InstanceNorm3d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=False)
(lrelu): LeakyReLU(negative_slope=0.01, inplace=True)
)
)
)
)
(5): Sequential(
(0): StackedConvLayers(
(blocks): Sequential(
(0): ConvDropoutNormNonlin(
(conv): Conv3d(224, 32, kernel_size=[1, 3, 3], stride=(1, 1, 1), padding=[0, 1, 1])
(instnorm): InstanceNorm3d(32, eps=1e-05, momentum=0.1, affine=True, track_running_stats=False)
(lrelu): LeakyReLU(negative_slope=0.01, inplace=True)
)
)
)
(1): StackedConvLayers(
(blocks): Sequential(
(0): ConvDropoutNormNonlin(
(conv): Conv3d(32, 32, kernel_size=[1, 3, 3], stride=(1, 1, 1), padding=[0, 1, 1])
(instnorm): InstanceNorm3d(32, eps=1e-05, momentum=0.1, affine=True, track_running_stats=False)
(lrelu): LeakyReLU(negative_slope=0.01, inplace=True)
)
)
)
)
)
(loc1): ModuleList(
(0): Sequential(
(0): StackedConvLayers(
(blocks): Sequential(
(0): ConvDropoutNormNonlin(
(conv): Conv3d(640, 320, kernel_size=[3, 3, 3], stride=(1, 1, 1), padding=[1, 1, 1])
(instnorm): InstanceNorm3d(320, eps=1e-05, momentum=0.1, affine=True, track_running_stats=False)
(lrelu): LeakyReLU(negative_slope=0.01, inplace=True)
)
)
)
(1): StackedConvLayers(
(blocks): Sequential(
(0): ConvDropoutNormNonlin(
(conv): Conv3d(320, 320, kernel_size=[3, 3, 3], stride=(1, 1, 1), padding=[1, 1, 1])
(instnorm): InstanceNorm3d(320, eps=1e-05, momentum=0.1, affine=True, track_running_stats=False)
(lrelu): LeakyReLU(negative_slope=0.01, inplace=True)
)
)
)
)
(1): Sequential(
(0): StackedConvLayers(
(blocks): Sequential(
(0): ConvDropoutNormNonlin(
(conv): Conv3d(768, 256, kernel_size=[3, 3, 3], stride=(1, 1, 1), padding=[1, 1, 1])
(instnorm): InstanceNorm3d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=False)
(lrelu): LeakyReLU(negative_slope=0.01, inplace=True)
)
)
)
(1): StackedConvLayers(
(blocks): Sequential(
(0): ConvDropoutNormNonlin(
(conv): Conv3d(256, 256, kernel_size=[3, 3, 3], stride=(1, 1, 1), padding=[1, 1, 1])
(instnorm): InstanceNorm3d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=False)
(lrelu): LeakyReLU(negative_slope=0.01, inplace=True)
)
)
)
)
(2): Sequential(
(0): StackedConvLayers(
(blocks): Sequential(
(0): ConvDropoutNormNonlin(
(conv): Conv3d(512, 128, kernel_size=[3, 3, 3], stride=(1, 1, 1), padding=[1, 1, 1])
(instnorm): InstanceNorm3d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=False)
(lrelu): LeakyReLU(negative_slope=0.01, inplace=True)
)
)
)
(1): StackedConvLayers(
(blocks): Sequential(
(0): ConvDropoutNormNonlin(
(conv): Conv3d(128, 128, kernel_size=[3, 3, 3], stride=(1, 1, 1), padding=[1, 1, 1])
(instnorm): InstanceNorm3d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=False)
(lrelu): LeakyReLU(negative_slope=0.01, inplace=True)
)
)
)
)
(3): Sequential(
(0): StackedConvLayers(
(blocks): Sequential(
(0): ConvDropoutNormNonlin(
(conv): Conv3d(320, 64, kernel_size=[1, 3, 3], stride=(1, 1, 1), padding=[0, 1, 1])
(instnorm): InstanceNorm3d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=False)
(lrelu): LeakyReLU(negative_slope=0.01, inplace=True)
)
)
)
(1): StackedConvLayers(
(blocks): Sequential(
(0): ConvDropoutNormNonlin(
(conv): Conv3d(64, 64, kernel_size=[1, 3, 3], stride=(1, 1, 1), padding=[0, 1, 1])
(instnorm): InstanceNorm3d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=False)
(lrelu): LeakyReLU(negative_slope=0.01, inplace=True)
)
)
)
)
(4): Sequential(
(0): StackedConvLayers(
(blocks): Sequential(
(0): ConvDropoutNormNonlin(
(conv): Conv3d(192, 32, kernel_size=[1, 3, 3], stride=(1, 1, 1), padding=[0, 1, 1])
(instnorm): InstanceNorm3d(32, eps=1e-05, momentum=0.1, affine=True, track_running_stats=False)
(lrelu): LeakyReLU(negative_slope=0.01, inplace=True)
)
)
)
(1): StackedConvLayers(
(blocks): Sequential(
(0): ConvDropoutNormNonlin(
(conv): Conv3d(32, 32, kernel_size=[1, 3, 3], stride=(1, 1, 1), padding=[0, 1, 1])
(instnorm): InstanceNorm3d(32, eps=1e-05, momentum=0.1, affine=True, track_running_stats=False)
(lrelu): LeakyReLU(negative_slope=0.01, inplace=True)
)
)
)
)
)
(loc2): ModuleList(
(0): Sequential(
(0): StackedConvLayers(
(blocks): Sequential(
(0): ConvDropoutNormNonlin(
(conv): Conv3d(512, 256, kernel_size=[3, 3, 3], stride=(1, 1, 1), padding=[1, 1, 1])
(instnorm): InstanceNorm3d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=False)
(lrelu): LeakyReLU(negative_slope=0.01, inplace=True)
)
)
)
(1): StackedConvLayers(
(blocks): Sequential(
(0): ConvDropoutNormNonlin(
(conv): Conv3d(256, 256, kernel_size=[3, 3, 3], stride=(1, 1, 1), padding=[1, 1, 1])
(instnorm): InstanceNorm3d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=False)
(lrelu): LeakyReLU(negative_slope=0.01, inplace=True)
)
)
)
)
(1): Sequential(
(0): StackedConvLayers(
(blocks): Sequential(
(0): ConvDropoutNormNonlin(
(conv): Conv3d(384, 128, kernel_size=[3, 3, 3], stride=(1, 1, 1), padding=[1, 1, 1])
(instnorm): InstanceNorm3d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=False)
(lrelu): LeakyReLU(negative_slope=0.01, inplace=True)
)
)
)
(1): StackedConvLayers(
(blocks): Sequential(
(0): ConvDropoutNormNonlin(
(conv): Conv3d(128, 128, kernel_size=[3, 3, 3], stride=(1, 1, 1), padding=[1, 1, 1])
(instnorm): InstanceNorm3d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=False)
(lrelu): LeakyReLU(negative_slope=0.01, inplace=True)
)
)
)
)
(2): Sequential(
(0): StackedConvLayers(
(blocks): Sequential(
(0): ConvDropoutNormNonlin(
(conv): Conv3d(256, 64, kernel_size=[1, 3, 3], stride=(1, 1, 1), padding=[0, 1, 1])
(instnorm): InstanceNorm3d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=False)
(lrelu): LeakyReLU(negative_slope=0.01, inplace=True)
)
)
)
(1): StackedConvLayers(
(blocks): Sequential(
(0): ConvDropoutNormNonlin(
(conv): Conv3d(64, 64, kernel_size=[1, 3, 3], stride=(1, 1, 1), padding=[0, 1, 1])
(instnorm): InstanceNorm3d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=False)
(lrelu): LeakyReLU(negative_slope=0.01, inplace=True)
)
)
)
)
(3): Sequential(
(0): StackedConvLayers(
(blocks): Sequential(
(0): ConvDropoutNormNonlin(
(conv): Conv3d(160, 32, kernel_size=[1, 3, 3], stride=(1, 1, 1), padding=[0, 1, 1])
(instnorm): InstanceNorm3d(32, eps=1e-05, momentum=0.1, affine=True, track_running_stats=False)
(lrelu): LeakyReLU(negative_slope=0.01, inplace=True)
)
)
)
(1): StackedConvLayers(
(blocks): Sequential(
(0): ConvDropoutNormNonlin(
(conv): Conv3d(32, 32, kernel_size=[1, 3, 3], stride=(1, 1, 1), padding=[0, 1, 1])
(instnorm): InstanceNorm3d(32, eps=1e-05, momentum=0.1, affine=True, track_running_stats=False)
(lrelu): LeakyReLU(negative_slope=0.01, inplace=True)
)
)
)
)
)
(loc3): ModuleList(
(0): Sequential(
(0): StackedConvLayers(
(blocks): Sequential(
(0): ConvDropoutNormNonlin(
(conv): Conv3d(256, 128, kernel_size=[3, 3, 3], stride=(1, 1, 1), padding=[1, 1, 1])
(instnorm): InstanceNorm3d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=False)
(lrelu): LeakyReLU(negative_slope=0.01, inplace=True)
)
)
)
(1): StackedConvLayers(
(blocks): Sequential(
(0): ConvDropoutNormNonlin(
(conv): Conv3d(128, 128, kernel_size=[3, 3, 3], stride=(1, 1, 1), padding=[1, 1, 1])
(instnorm): InstanceNorm3d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=False)
(lrelu): LeakyReLU(negative_slope=0.01, inplace=True)
)
)
)
)
(1): Sequential(
(0): StackedConvLayers(
(blocks): Sequential(
(0): ConvDropoutNormNonlin(
(conv): Conv3d(192, 64, kernel_size=[1, 3, 3], stride=(1, 1, 1), padding=[0, 1, 1])
(instnorm): InstanceNorm3d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=False)
(lrelu): LeakyReLU(negative_slope=0.01, inplace=True)
)
)
)
(1): StackedConvLayers(
(blocks): Sequential(
(0): ConvDropoutNormNonlin(
(conv): Conv3d(64, 64, kernel_size=[1, 3, 3], stride=(1, 1, 1), padding=[0, 1, 1])
(instnorm): InstanceNorm3d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=False)
(lrelu): LeakyReLU(negative_slope=0.01, inplace=True)
)
)
)
)
(2): Sequential(
(0): StackedConvLayers(
(blocks): Sequential(
(0): ConvDropoutNormNonlin(
(conv): Conv3d(128, 32, kernel_size=[1, 3, 3], stride=(1, 1, 1), padding=[0, 1, 1])
(instnorm): InstanceNorm3d(32, eps=1e-05, momentum=0.1, affine=True, track_running_stats=False)
(lrelu): LeakyReLU(negative_slope=0.01, inplace=True)
)
)
)
(1): StackedConvLayers(
(blocks): Sequential(
(0): ConvDropoutNormNonlin(
(conv): Conv3d(32, 32, kernel_size=[1, 3, 3], stride=(1, 1, 1), padding=[0, 1, 1])
(instnorm): InstanceNorm3d(32, eps=1e-05, momentum=0.1, affine=True, track_running_stats=False)
(lrelu): LeakyReLU(negative_slope=0.01, inplace=True)
)
)
)
)
)
(loc4): ModuleList(
(0): Sequential(
(0): StackedConvLayers(
(blocks): Sequential(
(0): ConvDropoutNormNonlin(
(conv): Conv3d(128, 64, kernel_size=[1, 3, 3], stride=(1, 1, 1), padding=[0, 1, 1])
(instnorm): InstanceNorm3d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=False)
(lrelu): LeakyReLU(negative_slope=0.01, inplace=True)
)
)
)
(1): StackedConvLayers(
(blocks): Sequential(
(0): ConvDropoutNormNonlin(
(conv): Conv3d(64, 64, kernel_size=[1, 3, 3], stride=(1, 1, 1), padding=[0, 1, 1])
(instnorm): InstanceNorm3d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=False)
(lrelu): LeakyReLU(negative_slope=0.01, inplace=True)
)
)
)
)
(1): Sequential(
(0): StackedConvLayers(
(blocks): Sequential(
(0): ConvDropoutNormNonlin(
(conv): Conv3d(96, 32, kernel_size=[1, 3, 3], stride=(1, 1, 1), padding=[0, 1, 1])
(instnorm): InstanceNorm3d(32, eps=1e-05, momentum=0.1, affine=True, track_running_stats=False)
(lrelu): LeakyReLU(negative_slope=0.01, inplace=True)
)
)
)
(1): StackedConvLayers(
(blocks): Sequential(
(0): ConvDropoutNormNonlin(
(conv): Conv3d(32, 32, kernel_size=[1, 3, 3], stride=(1, 1, 1), padding=[0, 1, 1])
(instnorm): InstanceNorm3d(32, eps=1e-05, momentum=0.1, affine=True, track_running_stats=False)
(lrelu): LeakyReLU(negative_slope=0.01, inplace=True)
)
)
)
)
)
(conv_blocks_context): ModuleList(
(0): StackedConvLayers(
(blocks): Sequential(
(0): ConvDropoutNormNonlin(
(conv): Conv3d(1, 32, kernel_size=[1, 3, 3], stride=(1, 1, 1), padding=[0, 1, 1])
(instnorm): InstanceNorm3d(32, eps=1e-05, momentum=0.1, affine=True, track_running_stats=False)
(lrelu): LeakyReLU(negative_slope=0.01, inplace=True)
)
(1): ConvDropoutNormNonlin(
(conv): Conv3d(32, 32, kernel_size=[1, 3, 3], stride=(1, 1, 1), padding=[0, 1, 1])
(instnorm): InstanceNorm3d(32, eps=1e-05, momentum=0.1, affine=True, track_running_stats=False)
(lrelu): LeakyReLU(negative_slope=0.01, inplace=True)
)
)
)
(1): StackedConvLayers(
(blocks): Sequential(
(0): ConvDropoutNormNonlin(
(conv): Conv3d(32, 64, kernel_size=[1, 3, 3], stride=[1, 2, 2], padding=[0, 1, 1])
(instnorm): InstanceNorm3d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=False)
(lrelu): LeakyReLU(negative_slope=0.01, inplace=True)
)
(1): ConvDropoutNormNonlin(
(conv): Conv3d(64, 64, kernel_size=[1, 3, 3], stride=(1, 1, 1), padding=[0, 1, 1])
(instnorm): InstanceNorm3d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=False)
(lrelu): LeakyReLU(negative_slope=0.01, inplace=True)
)
)
)
(2): StackedConvLayers(
(blocks): Sequential(
(0): ConvDropoutNormNonlin(
(conv): Conv3d(64, 128, kernel_size=[1, 3, 3], stride=[1, 2, 2], padding=[0, 1, 1])
(instnorm): InstanceNorm3d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=False)
(lrelu): LeakyReLU(negative_slope=0.01, inplace=True)
)
(1): ConvDropoutNormNonlin(
(conv): Conv3d(128, 128, kernel_size=[1, 3, 3], stride=(1, 1, 1), padding=[0, 1, 1])
(instnorm): InstanceNorm3d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=False)
(lrelu): LeakyReLU(negative_slope=0.01, inplace=True)
)
)
)
(3): StackedConvLayers(
(blocks): Sequential(
(0): ConvDropoutNormNonlin(
(conv): Conv3d(128, 256, kernel_size=[3, 3, 3], stride=[1, 2, 2], padding=[1, 1, 1])
(instnorm): InstanceNorm3d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=False)
(lrelu): LeakyReLU(negative_slope=0.01, inplace=True)
)
(1): ConvDropoutNormNonlin(
(conv): Conv3d(256, 256, kernel_size=[3, 3, 3], stride=(1, 1, 1), padding=[1, 1, 1])
(instnorm): InstanceNorm3d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=False)
(lrelu): LeakyReLU(negative_slope=0.01, inplace=True)
)
)
)
(4): StackedConvLayers(
(blocks): Sequential(
(0): ConvDropoutNormNonlin(
(conv): Conv3d(256, 320, kernel_size=[3, 3, 3], stride=[2, 2, 2], padding=[1, 1, 1])
(instnorm): InstanceNorm3d(320, eps=1e-05, momentum=0.1, affine=True, track_running_stats=False)
(lrelu): LeakyReLU(negative_slope=0.01, inplace=True)
)
(1): ConvDropoutNormNonlin(
(conv): Conv3d(320, 320, kernel_size=[3, 3, 3], stride=(1, 1, 1), padding=[1, 1, 1])
(instnorm): InstanceNorm3d(320, eps=1e-05, momentum=0.1, affine=True, track_running_stats=False)
(lrelu): LeakyReLU(negative_slope=0.01, inplace=True)
)
)
)
(5): StackedConvLayers(
(blocks): Sequential(
(0): ConvDropoutNormNonlin(
(conv): Conv3d(320, 320, kernel_size=[3, 3, 3], stride=[2, 2, 2], padding=[1, 1, 1])
(instnorm): InstanceNorm3d(320, eps=1e-05, momentum=0.1, affine=True, track_running_stats=False)
(lrelu): LeakyReLU(negative_slope=0.01, inplace=True)
)
(1): ConvDropoutNormNonlin(
(conv): Conv3d(320, 320, kernel_size=[3, 3, 3], stride=(1, 1, 1), padding=[1, 1, 1])
(instnorm): InstanceNorm3d(320, eps=1e-05, momentum=0.1, affine=True, track_running_stats=False)
(lrelu): LeakyReLU(negative_slope=0.01, inplace=True)
)
)
)
(6): Sequential(
(0): StackedConvLayers(
(blocks): Sequential(
(0): ConvDropoutNormNonlin(
(conv): Conv3d(320, 320, kernel_size=[3, 3, 3], stride=[1, 2, 2], padding=[1, 1, 1])
(instnorm): InstanceNorm3d(320, eps=1e-05, momentum=0.1, affine=True, track_running_stats=False)
(lrelu): LeakyReLU(negative_slope=0.01, inplace=True)
)
)
)
(1): StackedConvLayers(
(blocks): Sequential(
(0): ConvDropoutNormNonlin(
(conv): Conv3d(320, 320, kernel_size=[3, 3, 3], stride=(1, 1, 1), padding=[1, 1, 1])
(instnorm): InstanceNorm3d(320, eps=1e-05, momentum=0.1, affine=True, track_running_stats=False)
(lrelu): LeakyReLU(negative_slope=0.01, inplace=True)
)
)
)
)
)
(td): ModuleList()
(up0): ModuleList(
(0): ConvTranspose3d(320, 320, kernel_size=[1, 2, 2], stride=[1, 2, 2], bias=False)
(1): ConvTranspose3d(320, 320, kernel_size=[2, 2, 2], stride=[2, 2, 2], bias=False)
(2): ConvTranspose3d(320, 256, kernel_size=[2, 2, 2], stride=[2, 2, 2], bias=False)
(3): ConvTranspose3d(256, 128, kernel_size=[1, 2, 2], stride=[1, 2, 2], bias=False)
(4): ConvTranspose3d(128, 64, kernel_size=[1, 2, 2], stride=[1, 2, 2], bias=False)
(5): ConvTranspose3d(64, 32, kernel_size=[1, 2, 2], stride=[1, 2, 2], bias=False)
)
(up1): ModuleList(
(0): ConvTranspose3d(320, 320, kernel_size=[2, 2, 2], stride=[2, 2, 2], bias=False)
(1): ConvTranspose3d(320, 256, kernel_size=[2, 2, 2], stride=[2, 2, 2], bias=False)
(2): ConvTranspose3d(256, 128, kernel_size=[1, 2, 2], stride=[1, 2, 2], bias=False)
(3): ConvTranspose3d(128, 64, kernel_size=[1, 2, 2], stride=[1, 2, 2], bias=False)
(4): ConvTranspose3d(64, 32, kernel_size=[1, 2, 2], stride=[1, 2, 2], bias=False)
)
(up2): ModuleList(
(0): ConvTranspose3d(320, 256, kernel_size=[2, 2, 2], stride=[2, 2, 2], bias=False)
(1): ConvTranspose3d(256, 128, kernel_size=[1, 2, 2], stride=[1, 2, 2], bias=False)
(2): ConvTranspose3d(128, 64, kernel_size=[1, 2, 2], stride=[1, 2, 2], bias=False)
(3): ConvTranspose3d(64, 32, kernel_size=[1, 2, 2], stride=[1, 2, 2], bias=False)
)
(up3): ModuleList(
(0): ConvTranspose3d(256, 128, kernel_size=[1, 2, 2], stride=[1, 2, 2], bias=False)
(1): ConvTranspose3d(128, 64, kernel_size=[1, 2, 2], stride=[1, 2, 2], bias=False)
(2): ConvTranspose3d(64, 32, kernel_size=[1, 2, 2], stride=[1, 2, 2], bias=False)
)
(up4): ModuleList(
(0): ConvTranspose3d(128, 64, kernel_size=[1, 2, 2], stride=[1, 2, 2], bias=False)
(1): ConvTranspose3d(64, 32, kernel_size=[1, 2, 2], stride=[1, 2, 2], bias=False)
)
(seg_outputs): ModuleList(
(0): Conv3d(32, 2, kernel_size=(1, 1, 1), stride=(1, 1, 1), bias=False)
(1): Conv3d(32, 2, kernel_size=(1, 1, 1), stride=(1, 1, 1), bias=False)
(2): Conv3d(32, 2, kernel_size=(1, 1, 1), stride=(1, 1, 1), bias=False)
(3): Conv3d(32, 2, kernel_size=(1, 1, 1), stride=(1, 1, 1), bias=False)
(4): Conv3d(32, 2, kernel_size=(1, 1, 1), stride=(1, 1, 1), bias=False)
)
)
14:21:08.713013:

14:21:08.713932:
epoch: 0
Traceback (most recent call last):
File "/python3.7.5/bin/nnUNet_train", line 11, in
load_entry_point('nnunet', 'console_scripts', 'nnUNet_train')()
File "/UnetPlusPlus/UNetPlusPlus-master/UNetPlusPlus-master/pytorch/nnunet/run/run_training.py", line 148, in main
trainer.run_training()
File "/UnetPlusPlus/UNetPlusPlus-master/UNetPlusPlus-master/pytorch/nnunet/training/network_training/nnUNetPlusPlusTrainerV2.py", line 424, in run_training
ret = super().run_training()
File "/UnetPlusPlus/UNetPlusPlus-master/UNetPlusPlus-master/pytorch/nnunet/training/network_training/nnUNetTrainer.py", line 316, in run_training
super(nnUNetTrainer, self).run_training()
File "/UnetPlusPlus/UNetPlusPlus-master/UNetPlusPlus-master/pytorch/nnunet/training/network_training/network_trainer.py", line 491, in run_training
l = self.run_iteration(self.tr_gen, True)
File "/UnetPlusPlus/UNetPlusPlus-master/UNetPlusPlus-master/pytorch/nnunet/training/network_training/nnUNetPlusPlusTrainerV2.py", line 242, in run_iteration
output = self.network(data)
File "/python3.7.5/lib/python3.7/site-packages/torch/nn/modules/module.py", line 722, in _call_impl
result = self.forward(*input, **kwargs)
File "/UnetPlusPlus/UNetPlusPlus-master/UNetPlusPlus-master/pytorch/nnunet/network_architecture/generic_UNetPlusPlus.py", line 409, in forward
x0_1 = self.loc4[0](torch.cat([x0_0, self.up4[0](x1_0)], 1))
File "/python3.7.5/lib/python3.7/site-packages/torch/nn/modules/module.py", line 722, in _call_impl
result = self.forward(*input, **kwargs)
File "/python3.7.5/lib/python3.7/site-packages/torch/nn/modules/conv.py", line 1066, in forward
output_padding, self.groups, self.dilation)
RuntimeError: Given transposed=1, weight of size [128, 64, 1, 2, 2], expected input[2, 64, 16, 160, 160] to have 128 channels, but got 64 channels instead
Exception in thread Thread-4:
Traceback (most recent call last):
File "/python3.7.5/lib/python3.7/threading.py", line 926, in _bootstrap_inner
self.run()
File "/python3.7.5/lib/python3.7/threading.py", line 870, in run
self._target(*self._args, **self._kwargs)
File "/UnetPlusPlus/UNetPlusPlus-master/UNetPlusPlus-master/pytorch/batchgenerators-master/batchgenerators/dataloading/multi_threaded_augmenter.py", line 99, in results_loop
raise RuntimeError("Someone died. Better end this madness. This is not the actual error message! Look "
RuntimeError: Someone died. Better end this madness. This is not the actual error message! Look further up your stdout to see what caused the error. Please also check whether your RAM was full

@zhouwang0924

Hey bro, I have the same problem. Have you solved it?

@MenxLi

MenxLi commented Jul 24, 2021

I have the same problem

@MenxLi

MenxLi commented Jul 25, 2021

I have the same problem

I found a possible solution to this.
In my case (and, I think, in yours too) the error arose because the plan for this task uses 6 pooling stages and 7 convolutional stages, but the UNetPlusPlus implementation only supports 5 pooling stages and 6 convolutional stages: it has only 5 "up" stages, so the indexing mismatches.

My fix was to use a custom planner derived from ExperimentPlanner3D_v21 that sets self.unet_max_numpool = 5, and to use that planner when calling nnUNet_plan_and_preprocess; a minimal sketch is below.
I don't know whether this is a generic solution or whether it impairs model performance.
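Something like this (an untested sketch; the class name is mine, only unet_max_numpool and the ExperimentPlanner3D_v21 base class come from nnUNet itself):

from nnunet.experiment_planning.experiment_planner_baseline_3DUNet_v21 import \
    ExperimentPlanner3D_v21


class ExperimentPlanner3D_maxpool5(ExperimentPlanner3D_v21):
    def __init__(self, folder_with_cropped_data, preprocessed_output_folder):
        super().__init__(folder_with_cropped_data, preprocessed_output_folder)
        # cap pooling at 5 stages so the generated plan fits the
        # 5 up stages of Generic_UNetPlusPlus
        self.unet_max_numpool = 5

With the file saved in the nnunet/experiment_planning folder, the planner can be selected through the -pl3d flag:

nnUNet_plan_and_preprocess -t 058 -pl3d ExperimentPlanner3D_maxpool5 --verify_dataset_integrity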

Please let me know if you guys find a better solution.

@kirillmeisser

I also have the same problem. @MenxLi would you mind giving a more detailed explanation of how you fixed it? I am trying to follow your steps, but I get the following error: "RuntimeError: Could not find the Planner class MyExperimentPlanner_v21. Make sure it is located somewhere in nnunet.experiment_planning". I made my custom planner like this:

from nnunet.experiment_planning.experiment_planner_baseline_3DUNet_v21 import \
    ExperimentPlanner3D_v21


class MyExperimentPlanner_v21(ExperimentPlanner3D_v21):
    def __init__(self, folder_with_cropped_data, preprocessed_output_folder):
        super(MyExperimentPlanner_v21, self).__init__(folder_with_cropped_data,
                                                      preprocessed_output_folder)
        # cap pooling at 5 stages to match the 5 up stages of UNet++
        self.unet_max_numpool = 5

I then saved this file in the experiment_planning folder. If you can help me, I would greatly appreciate it!

@kirillmeisser

In the end I solved the issue by using lower-resolution images. With smaller images the planner chooses fewer pooling layers, and the error goes away.

@MenxLi

MenxLi commented Aug 18, 2021

In the end I solved the issue by using lower-resolution images. With smaller images the planner chooses fewer pooling layers, and the error goes away.

It's great to hear that it works; as long as you end up with fewer pooling layers, it will work.
The new planner class should be placed in the nnunet/experiment_planning folder. Did you install nnunet with pip install -e (editable mode)? Only with an editable install does the package stay in place instead of being copied to the site-packages folder, so your changes can be found by the interpreter.
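For example, from the repository root (assuming the layout in the paths above, where the package sits in the pytorch directory):

git clone https://github.com/MrGiovanni/UNetPlusPlus.git
cd UNetPlusPlus/pytorch
pip install -e .

After an editable install, a planner file dropped into nnunet/experiment_planning is found by the interpreter without reinstalling.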

@kirillmeisser

Yes, you are right, I probably didn't install it with the -e option. I didn't even know about it, haha :)

@web-girlfriend

The new planner class should be placed in the nnunet/experiment_planning folder

Could you please tell me how to derive a custom planner from ExperimentPlanner3D_v21? I need help! I would appreciate it if you could tell me.
