To reproduce
From the Jupyter notebook `2_training.ipynb`:

```python
#%% Configuration
# A `StarDist3D` model is specified via a `Config3D` object.
extents = calculate_extents(Y)
anisotropy = tuple(np.max(extents) / extents)
print('empirical anisotropy of labeled objects = %s' % str(anisotropy))

# 96 is a good default choice (see 1_data.ipynb)
n_rays = 96

# Use OpenCL-based computations for data generator during training (requires 'gputools')
use_gpu = True and gputools_available()

# Predict on subsampled grid for increased efficiency and larger field of view
grid = tuple(1 if a > 1.5 else 2 for a in anisotropy)

# Use rays on a Fibonacci lattice adjusted for measured anisotropy of the training data
rays = Rays_GoldenSpiral(n_rays, anisotropy=anisotropy)

# backbone: 'unet' or 'resnet'
backbone = 'unet'

conf = Config3D(
    rays=rays,
    grid=grid,
    anisotropy=anisotropy,
    use_gpu=use_gpu,
    n_channel_in=n_channel,
    # adjust for your data below (make patch size as large as possible)
    train_patch_size=(48, 96, 96),
    train_batch_size=2,
    backbone=backbone,
)
print(conf)
vars(conf)

if use_gpu:
    from csbdeep.utils.tf import limit_gpu_memory
    # adjust as necessary: limit GPU memory to be used by TensorFlow
    # to leave some to OpenCL-based computations
    # limit_gpu_memory(0.8)
    # alternatively, try this:
    limit_gpu_memory(None, allow_growth=True)

# **Note:** The trained `StarDist3D` model will *not* predict completed shapes
# for partially visible objects at the image boundary.
model = StarDist3D(conf, name='stardist', basedir='models')

# Check if the neural network has a large enough field of view
# to see up to the boundary of most objects.
median_size = calculate_extents(Y, np.median)
fov = np.array(model._axes_tile_overlap('ZYX'))
print(f"median object size: {median_size}")
print(f"network field of view: {fov}")
if any(median_size > fov):
    print("WARNING: median object size larger than field of view of the neural network.")

#%% Train the model
gpus = tf.config.list_logical_devices('GPU')
for gpu in gpus:
    with tf.device(gpu.name):
        model.train(X_trn, Y_trn, validation_data=(X_val, Y_val),
                    augmenter=augmenter, epochs=10)
```
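The anisotropy and grid logic in the configuration above can be checked with a small numeric example (the extents here are made up, standing in for the output of `calculate_extents(Y)` on real data):

```python
import numpy as np

# Toy extents (Z, Y, X); illustrative values, not from the actual dataset.
extents = np.array([8.0, 24.0, 24.0])

# Empirical anisotropy: largest axis extent divided by each axis extent.
anisotropy = tuple(np.max(extents) / extents)   # (3.0, 1.0, 1.0)

# Subsample (grid=2) only the axes that are not strongly anisotropic (a <= 1.5).
grid = tuple(1 if a > 1.5 else 2 for a in anisotropy)  # (1, 2, 2)

print(anisotropy, grid)
```

So for a stack whose objects are three times flatter in Z than in Y/X, only Y and X are predicted on a subsampled grid.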
> I try to train a 3D model with my experimental 3D images. It works with CPU but fails with GPU. With GPU, `model.train` runs only one step of the first epoch (very slowly) and then freezes the console.
You're training with relatively big patch sizes, so it could be that it freezes because there's no more memory available. Try setting `train_batch_size = 1` and see if the problem goes away.
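As a rough sanity check of why the batch size matters, the raw input tensor per batch scales linearly with `train_batch_size` (illustrative arithmetic only; the helper name is made up, and in practice the network's intermediate activations and gradients use far more memory than the inputs counted here):

```python
import numpy as np

def batch_input_megabytes(batch_size, patch_size, n_channels=1, bytes_per_value=4):
    """Rough size in MB of one float32 input batch (inputs only; activations
    and gradients, which dominate real GPU usage, are not counted)."""
    return batch_size * int(np.prod(patch_size)) * n_channels * bytes_per_value / 1e6

# With the patch size from the notebook, halving the batch size halves the input footprint.
print(batch_input_megabytes(2, (48, 96, 96)))  # ~3.5 MB
print(batch_input_megabytes(1, (48, 96, 96)))  # ~1.8 MB
```

Shrinking `train_patch_size` has the same effect, since the footprint is proportional to the product of the patch dimensions.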
> A strange issue is that it works (but slowly) when `backbone = 'resnet'` instead of `backbone = 'unet'`.
It could be that the UNet uses slightly more memory and therefore causes the problem.
Describe the bug
I'm trying to train a 3D model with my experimental 3D images.
It works with CPU but fails with GPU.
With GPU, `model.train` runs only one step of the first epoch (very slowly) and then freezes the console.
A strange issue is that it works (but slowly) when `backbone = 'resnet'` instead of `backbone = 'unet'`.
Note that I need to […] if I want to train with CPU.
To reproduce
From the Jupyter notebook `2_training.ipynb` (see the code listing above).
Expected behavior
That it works ;o)
Data and screenshots
Environment (please complete the following information):