
[FEATURE] a separate quantification algorithm or ability to input a single x-y coordinate as the position of a cell (instead of a mask) #915

Open
tanpancodes opened this issue Apr 11, 2024 · 3 comments

@tanpancodes

Is your feature request related to a problem? Please describe.
I am trying to use some existing data to create my own model (using human-in-the-loop feature), but the data we have from our lab only has the location of cells marked with an X.

Our existing data does not have each cell carved out as its own region with a mask/ROI, which the current cellpose training feature requires. Combing through 4-5 years' worth of lab data again to create a training set is impractical.

Describe the solution you'd like
The ability to input a single set of x,y coordinates as the location of a cell, and to use this either to generate a cell count for an image or to generate masks for the cells in that image.

Describe alternatives you've considered
I have tried importing the x-y coordinates we have for various images and generating masks by creating a circle centered on each marked point. However, given that cells have varying morphologies with protrusions and occasionally overlap, this has not worked well.
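For reference, the circle-around-centroid approach described above can be sketched as follows. This is a minimal illustration, not code from the issue; the radius, coordinates, and function name are all assumptions.

```python
import numpy as np

def disks_from_points(points, shape, radius=10):
    """Build an instance-label mask (0 = background, 1..N = cells)
    by stamping a fixed-radius disk around each (x, y) centroid.

    Fixed-radius disks are exactly the limitation discussed above:
    they ignore cell morphology, and later disks overwrite earlier
    ones where cells overlap.
    """
    labels = np.zeros(shape, dtype=np.int32)
    yy, xx = np.mgrid[:shape[0], :shape[1]]  # pixel coordinate grids
    for i, (x, y) in enumerate(points, start=1):
        disk = (xx - x) ** 2 + (yy - y) ** 2 <= radius ** 2
        labels[disk] = i  # overlapping disks are overwritten, not merged
    return labels

# Hypothetical centroids on a 64x64 image
points = [(20, 20), (50, 48)]
masks = disks_from_points(points, (64, 64), radius=8)
```

The resulting `masks` array uses the same pixel-wise instance-label convention cellpose expects for training data, which is why the approach is tempting despite its shape limitations.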

Additional context

[image: example where the number of cells is quantified with a red 'X' instead of with masks]

[image: another example where cyto3 is used to try to segment cells (red channel), but the result is not very accurate]

@tanpancodes tanpancodes added the enhancement New feature or request label Apr 11, 2024
@ashishUthama

This might be outside cellpose's scope.
Have you considered SAM (Segment Anything Model) and its medical-imaging derivatives? They accept point annotations like yours as prompts and do a good job of segmenting the 'thing' around each one.

@tanpancodes
Author

> This might be outside cellpose's scope. Have you considered SAM (Segment Anything Model) and its medical-imaging derivatives? They accept point annotations like yours as prompts and do a good job of segmenting the 'thing' around each one.

Apologies if my question is unclear. I am not trying to generate the yellow lines (the big segmentation). I am trying to generate small masks for each individual cell. (The number of masks I expect to generate in this image is roughly the number of red 'X's in the labeled image.)

I would presume this is within the purview of cellpose, since I am indeed trying to segment an image of cells; I just don't have the training data in the right format and was wondering if there is a better way to convert it into usable input data.

@mrariden
Collaborator

@tanpancodes cellpose requires pixel-wise, instance-labeled cell masks to train a model. There is no way to determine which pixels belong to a cell from a single centroid label alone. Unfortunately, your request is outside the scope of cellpose, because cellpose is a segmentation algorithm. In line with @ashishUthama's suggestion, you could look into object detection algorithms for your purpose; QuPath and napari may have suitable solutions/plugins available.

However, using the cellpose GUI with the human-in-the-loop (HITL) training approach may be faster than you suspect, and it can get the job done once you have an adequately fine-tuned model.

@mrariden mrariden self-assigned this Apr 18, 2024