Predicting next stage 3d_cascade_fullres failed for case 0011 because the preprocessed file is missing! Run the preprocessing for this configuration first!
#2186 · Open
chenney0830 opened this issue on May 15, 2024 · 0 comments
I encountered this issue during validation of the 3d_lowres training. I have re-run the preprocessing many times, but the problem persists.
Do you have any idea how to solve it?
2024-05-15 18:34:11.890179: This split has 336 training and 84 validation cases.
2024-05-15 18:34:11.890530: predicting 0010
2024-05-15 18:34:11.898345: 0010, shape torch.Size([1, 142, 168, 221]), rank 0
2024-05-15 18:34:20.825013: Predicting next stage 3d_cascade_fullres failed for case 0010 because the preprocessed file is missing! Run the preprocessing for this configuration first!
2024-05-15 18:34:20.825432: predicting 0011
2024-05-15 18:34:20.837446: 0011, shape torch.Size([1, 133, 176, 239]), rank 0
2024-05-15 18:34:28.397316: Predicting next stage 3d_cascade_fullres failed for case 0011 because the preprocessed file is missing! Run the preprocessing for this configuration first!
2024-05-15 18:34:28.397523: predicting 0018
2024-05-15 18:34:28.407552: 0018, shape torch.Size([1, 136, 174, 210]), rank 0
2024-05-15 18:34:36.233374: Predicting next stage 3d_cascade_fullres failed for case 0018 because the preprocessed file is missing! Run the preprocessing for this configuration first!
2024-05-15 18:34:36.235347: predicting 0019
2024-05-15 18:34:36.246605: 0019, shape torch.Size([1, 132, 177, 208]), rank 0
2024-05-15 18:34:44.031341: Predicting next stage 3d_cascade_fullres failed for case 0019 because the preprocessed file is missing! Run the preprocessing for this configuration first!
2024-05-15 18:34:44.033534: predicting 002
2024-05-15 18:34:44.046287: 002, shape torch.Size([1, 140, 195, 210]), rank 0
2024-05-15 18:34:54.457248: Predicting next stage 3d_cascade_fullres failed for case 002 because the preprocessed file is missing! Run the preprocessing for this configuration first!
Traceback (most recent call last):
  File "/home/chenney/anaconda3/envs/nnUNet_test/bin/nnUNetv2_train", line 8, in <module>
    sys.exit(run_training_entry())
  File "/home/chenney/nnUNet_test/nnUNet/nnUNet/nnunetv2/run/run_training.py", line 275, in run_training_entry
    run_training(args.dataset_name_or_id, args.configuration, args.fold, args.tr, args.p, args.pretrained_weights,
  File "/home/chenney/nnUNet_test/nnUNet/nnUNet/nnunetv2/run/run_training.py", line 215, in run_training
    nnunet_trainer.perform_actual_validation(export_validation_probabilities)
  File "/home/chenney/nnUNet_test/nnUNet/nnUNet/nnunetv2/training/nnUNetTrainer/nnUNetTrainer.py", line 1244, in perform_actual_validation
    proceed = not check_workers_alive_and_busy(segmentation_export_pool, worker_list, results,
  File "/home/chenney/nnUNet_test/nnUNet/nnUNet/nnunetv2/utilities/file_path_utilities.py", line 103, in check_workers_alive_and_busy
    raise RuntimeError('Some background workers are no longer alive')
RuntimeError: Some background workers are no longer alive
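For context, the log shows two linked symptoms: validation of the 3d_lowres model tries to export predictions for the cascade's next stage (3d_cascade_fullres), but cannot find the preprocessed fullres data for each case, and the background export workers then die, which trips `check_workers_alive_and_busy`. A quick way to narrow this down is to check whether the expected preprocessed files actually exist on disk. The sketch below is a hypothetical diagnostic, not nnU-Net API: the helper names are mine, and the folder layout `<plans_identifier>_<configuration>/<case>.npz` is an assumption based on the default `nnUNetPlans` naming.

```python
import os


def expected_preprocessed_file(preprocessed_base: str, dataset_name: str,
                               plans_identifier: str, configuration: str,
                               case_id: str) -> str:
    """Build the path where a case's preprocessed data is assumed to live.

    Assumed layout: <base>/<dataset>/<plans>_<configuration>/<case>.npz
    (matches the default nnUNetPlans folder naming, but verify against
    your own nnUNet_preprocessed directory).
    """
    return os.path.join(preprocessed_base, dataset_name,
                        f"{plans_identifier}_{configuration}",
                        f"{case_id}.npz")


def missing_cases(preprocessed_base: str, dataset_name: str,
                  configuration: str, case_ids,
                  plans_identifier: str = "nnUNetPlans"):
    """Return the case ids whose preprocessed .npz file is absent."""
    return [c for c in case_ids
            if not os.path.isfile(expected_preprocessed_file(
                preprocessed_base, dataset_name, plans_identifier,
                configuration, c))]
```

If cases such as 0010 or 0011 come back as missing for the `3d_fullres` configuration, preprocessing for that configuration likely never ran (or wrote to a different plans identifier); re-running it explicitly, e.g. with `nnUNetv2_preprocess -d DATASET_ID -c 3d_fullres`, should produce the files the cascade stage expects.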