3D segmentation - min tile size in fiji #63

Open
ayallavi opened this issue Apr 14, 2020 · 5 comments

Comments

@ayallavi

Hello,
I'm trying to segment/count cells in 3D fluorescent confocal Z-stacks (1024x1024x19; XxYxZ) that have been manually annotated with Nikon imaging software. I set up the frontend on a PC and the backend on a standalone Ubuntu 18.04 machine. I have a few specific questions:

  1. The estimated memory use is 6136MB; however, I cannot reduce the Z tile size below 92 in the U-Net Finetuning Fiji plugin. Since I'm using a GTX 1060 6GB, this is a limiting factor. As far as I understand, Z = 30 should suffice, right?
  2. Since the images are annotated and I'm basically interested in cell counting - can I replace segments with small spheres around the centroid? If so, what would be the best size?
    Thank you in advance!
@ThorstenFalk
Collaborator

ThorstenFalk commented Apr 14, 2020

  1. The current 3D model assumes an anisotropic voxel shape of 0.5x0.5x1µm, but it targets thick tissues and therefore uses 90vx of context in the z-direction (which is the reason for the lower limit). Since you only have 19 slices, I assume you analyze more or less cellular mono-layers and don't need a lot of context in the z-direction. I'd go for a model with only three resolution levels in this case (assuming your cells don't exceed a diameter of 45µm).
  2. Yes, you can (if your cells are mainly convex). You are basically describing how the detection mode of the plugin works. The plugin uses spheres of 3px radius; see the sketch below.
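To make point 2 concrete, here is a minimal sketch (not the plugin's code) of how annotated centroids could be turned into detection-style labels by stamping a small sphere around each point. The 3px radius matches the plugin's detection mode; the function name, the NumPy approach, and the `z_scale` handling for anisotropic voxels are my own assumptions for illustration.

```python
import numpy as np

def centroid_spheres(shape_zyx, centroids_zyx, radius_px=3, z_scale=2.0):
    """Binary label volume with a small sphere around each centroid.

    `z_scale` is the ratio of z-spacing to xy-spacing, so the stamped
    sphere stays roughly round in physical units on anisotropic voxels.
    """
    labels = np.zeros(shape_zyx, dtype=np.uint8)
    rz = max(int(round(radius_px / z_scale)), 1)  # fewer slices in z
    for cz, cy, cx in centroids_zyx:
        z0, z1 = max(cz - rz, 0), min(cz + rz + 1, shape_zyx[0])
        y0, y1 = max(cy - radius_px, 0), min(cy + radius_px + 1, shape_zyx[1])
        x0, x1 = max(cx - radius_px, 0), min(cx + radius_px + 1, shape_zyx[2])
        zz, yy, xx = np.ogrid[z0:z1, y0:y1, x0:x1]
        dist2 = ((zz - cz) * z_scale) ** 2 + (yy - cy) ** 2 + (xx - cx) ** 2
        labels[z0:z1, y0:y1, x0:x1][dist2 <= radius_px ** 2] = 1
    return labels

# Example: two nuclei in a 19-slice stack with 0.5 x 0.5 x 1 um voxels (z_scale = 2)
mask = centroid_spheres((19, 1024, 1024), [(9, 100, 120), (10, 400, 380)], z_scale=2.0)
```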

@ayallavi
Author

First, thank you very much for the quick response!

I'm imaging 50 µm slices with a 20x Z-stack, so my resolution is 0.62 x 0.62 x 2 µm (X, Y, Z). In the images, I see 2-4 layers of cell nuclei (depending on the slicing angle and the brain region). I apologize, but I'm not sure what you mean by "model with only three resolution levels". Do you mean I should choose some middle slice (or several) and train the model in 2D?
Additionally, I tried downsampling to 512x512 and converting to 8-bit, but neither seemed to affect the estimated GPU memory consumption.

@ThorstenFalk
Collaborator

ThorstenFalk commented Apr 15, 2020

My idea was to train an entirely new model with a network architecture adapted to your kind of data. Since I now know a little more, I'll change my suggestion: train a network with four resolution levels as before, but on the two highest resolution levels use 2-D kernels and 2-D max-pooling only, so that your voxels are approximately isotropic on the third and fourth resolution levels.
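To see why this gives approximately isotropic voxels on the deeper levels, here is a quick back-of-the-envelope computation using your stated 0.62 x 0.62 x 2 µm element size (the script is purely illustrative, not part of the plugin):

```python
# 2-D max-pooling halves the in-plane resolution only, so after two 2-D pooling
# steps the in-plane spacing roughly matches the z-spacing and 3-D pooling can take over.
elsize = [0.62, 0.62, 2.0]    # x, y, z element size in um at the input resolution
pooling = ["2D", "2D", "3D"]  # pooling applied between resolution levels 1-2, 2-3, 3-4
for level, p in enumerate(pooling, start=2):
    elsize = [elsize[0] * 2, elsize[1] * 2, elsize[2] * (2 if p == "3D" else 1)]
    print(f"level {level}: {elsize[0]:.2f} x {elsize[1]:.2f} x {elsize[2]:.2f} um")
# level 2: 1.24 x 1.24 x 2.00 um
# level 3: 2.48 x 2.48 x 2.00 um  <- approximately isotropic
# level 4: 4.96 x 4.96 x 4.00 um
```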

For this, create a new modeldef.h5 file using "Plugins->U-Net->Utilities->Create New Model" and set it up similarly to the dialog shown below:
[screenshot: "Create New Model" dialog with the suggested architecture]

Then use the "Finetuning" operation to train the new model from scratch. Simply leave the "weights filename" field empty and confirm that you want to train from scratch if the plugin asks for confirmation. If all your stacks have exactly the same resolution, you might want to click "From image" in the element size selection of the Finetuning dialog to avoid extra image resampling.

@ayallavi
Author

Thank you very much for the tailor-made model! Now I can use 316 x 316 x 44 tiles at an expected 5181 MB GPU memory consumption (with cuDNN), which should work well with a 6 GB graphics card.
I will start with 20K iterations with a validation interval of 20 and will update on the progress.

@ayallavi
Author

ayallavi commented May 1, 2020

Quick update: First of all, the model runs through and I was able to complete a full training run from scratch. The model was trained on 4 images and validated with 1 image.
[screenshot: training/validation progress plot]

The network was trained to identify 3 classes, where the green is a co-localization of the black and the red. However, after 10K iterations, the green class could not be identified and the black class reached a segmentation score of less than 0.2. Also, it seems like segmentation has reached a plateau. Note that the prevalence of the green cells is ~1% of that of the red and the black cells.

What would be the best way to move forward and improve detection/segmentation? Add more images (with the green class), or train separately for each class? More iterations (and if so, how many)? What would be an expected improvement rate?

And a general question: does the number of tiles matter for learning? In this example, the validation image was tiled into 36 sub-images.
