Describe the bug
Hello. I encounter an error while running the snippet for cell predictions. Specifically, my images have a white background with a region of interest standing out from it (I use a white background because StarDist's erroneous predictions caused by the contours' contrast always fall on the outside part, hence are easy to remove with code). Strangely enough, I don't get this error with my previous image datasets, although they have practically the same dimensions and bit depth.
To reproduce
I am using the snippet provided by StarDist for predictions based on a previously trained model. I am pasting it here as well; please note that it is slightly changed with respect to the paths, directories, etc., but it is practically the same as the one provided on the site.
```python
from __future__ import print_function, unicode_literals, absolute_import, division
import os
import sys
import numpy as np
import matplotlib
matplotlib.rcParams["image.interpolation"] = 'none'
import matplotlib.pyplot as plt
%matplotlib inline
%config InlineBackend.figure_format = 'retina'

from glob import glob
from tifffile import imread
from csbdeep.utils import Path, normalize
from csbdeep.io import save_tiff_imagej_compatible

from stardist import random_label_cmap, _draw_polygons, export_imagej_rois
from stardist.models import StarDist2D

np.random.seed(6)
lbl_cmap = random_label_cmap()

os.chdir(r'C:\Users\angdid\Desktop')
X = sorted(glob('LA whitee/*.tiff'))
X = list(map(imread, X))

n_channel = 1 if X[0].ndim == 2 else X[0].shape[-1]
axis_norm = (0, 1)  # normalize channels independently
if n_channel > 1:
    print("Normalizing image channels %s." % ('jointly' if axis_norm is None or 2 in axis_norm else 'independently'))

if True:
    fig, ax = plt.subplots(9, 8, figsize=(32, 32))  # 5,5 to create a template of 25. For more images change it
    for i, (a, x) in enumerate(zip(ax.flat, X)):
        a.imshow(x, cmap='gray')
        a.set_title(i, fontsize=50)
    [a.axis('off') for a in ax.flat]
    plt.tight_layout()
None;

demo_model = False
if demo_model:
    print(
        "NOTE: This is loading a previously trained demo model!\n"
        "      Please set the variable 'demo_model = False' to load your own trained model.",
        file=sys.stderr, flush=True
    )
    model = StarDist2D.from_pretrained('2D_demo')
else:
    model = StarDist2D(None, name='tdTom', basedir='models')
None;

fig, ax = plt.subplots(9, 8, figsize=(32, 32))  # 5,5 to create a template of 25. For more images change it
os.chdir(r'C:\Users\angdid\Desktop\LA whitee')
images_names = sorted(glob('*.tiff'))
os.chdir(r'C:\Users\angdid\Desktop\Results')
for index, (a, x) in enumerate(zip(ax.flat, X)):
    img = normalize(X[index], 1, 99.8, axis=axis_norm)
    labels, details = model.predict_instances(img, prob_thresh=0.4)  # also try prob_thresh=0.4
    a.imshow(labels, cmap='gray')
    a.set_title(index, fontsize=50)
    save_tiff_imagej_compatible(f'{images_names[index]}.tiff', img, axes='YX')
    save_tiff_imagej_compatible(f'{images_names[index]}-labels.tiff', labels, axes='YX')
    export_imagej_rois(f'{images_names[index]}.zip', details['coord'])
```
Expected behavior
Here I would normally expect to get the prediction output, that is, a 'labels' image and a zip file containing .roi files. My (purely intuitive) suspicion is that the white background is somehow too large compared to the region standing out, because previous datasets with larger regions on the same background worked fine.
Data and screenshots
Below are two different images. The dataset with images like the one on the left (larger area) works fine; the dataset with images like the one on the right yields the error.
Environment (please complete the following information):
StarDist version 0.8.5
CSBDeep version 0.7.4
TensorFlow version 2.14.0
OS: Windows
GPU memory (if applicable): 16 GB
The error indicates to me that the coordinates of the (to be exported) objects are not compatible with ImageJ's ROI export format, which uses a short integer data type (-32768...+32767) for the object coordinates.
I'm assuming your image is bigger than 32767 pixels in some dimension and that's the reason the ROI export fails.
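A minimal sketch of that diagnosis, assuming NumPy: the helper name `coords_fit_imagej` is hypothetical (not part of StarDist), but it shows how one could check whether the polygon coordinates from `details['coord']` fit into ImageJ's signed 16-bit ROI coordinate range before calling `export_imagej_rois`.

```python
import numpy as np

def coords_fit_imagej(coord, lo=-32768, hi=32767):
    """Return True if all polygon coordinates fit in ImageJ's int16 ROI format.

    `coord` is the array from details['coord'] returned by
    StarDist2D.predict_instances, shaped (n_objects, 2, n_rays).
    (Helper name and check are illustrative, not a StarDist API.)
    """
    coord = np.asarray(coord)
    return bool(coord.size == 0 or (coord.min() >= lo and coord.max() <= hi))

# A coordinate beyond 32767 (e.g. from an image wider than 32767 px)
# is exactly the case where the ROI export would fail.
ok = coords_fit_imagej(np.array([[[10.0, 20.0], [15.0, 40000.0]]]))
print(ok)  # False
```

Guarding the export with such a check (or simply inspecting `img.shape` against 32767) would confirm whether the oversized dimension is indeed the cause.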
And the error ...
The model that I am using is one that I trained (uploaded in WeTransfer).