
Improve segmentation step to get single label and single marker for each object #16

Open
1 of 2 tasks
ramsrigouthamg opened this issue Dec 3, 2023 · 5 comments
Labels: enhancement (New feature or request)

@ramsrigouthamg

Search before asking

  • I have searched the Multimodal Maestro issues and found no similar feature requests.

Description

I am trying to segment objects so that each object has exactly one label and a clearly defined segmentation boundary.
At the moment, in the post-processing refiner step of the tutorial (Colab) notebook in the repo, the hard-coded 0.02 value doesn't work well for most images and misses correct segmentation clusters, so most individual objects are either missed or merged into the background.

The refiner function performs four different tasks at once (hole filling, minimum area, max …). It would be good to isolate these steps, or please suggest a better way to isolate individual objects and their segmentation pixels precisely.
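For illustration, two of the refinement tasks mentioned above could be pulled apart into standalone functions. The sketch below uses `scipy.ndimage`; the function names and the 0.02 default are assumptions for the example, not the library's actual API:

```python
import numpy as np
from scipy import ndimage


def fill_holes(mask: np.ndarray) -> np.ndarray:
    """Fill internal holes in a binary mask (one of the refiner's four tasks)."""
    return ndimage.binary_fill_holes(mask)


def filter_small_components(mask: np.ndarray, min_area_ratio: float = 0.02) -> np.ndarray:
    """Keep only connected components covering at least min_area_ratio of the image.

    Exposing min_area_ratio as a parameter (instead of the notebook's
    hard-coded 0.02) lets it be tuned per image.
    """
    labeled, num_components = ndimage.label(mask)
    min_area = min_area_ratio * mask.size
    out = np.zeros_like(mask, dtype=bool)
    for i in range(1, num_components + 1):
        component = labeled == i
        if component.sum() >= min_area:
            out |= component
    return out
```

Running each step separately like this makes it possible to see which one drops a given object, instead of debugging the combined refiner.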

Use case

No response

Additional

No response

Are you willing to submit a PR?

  • Yes I'd like to help by submitting a PR!
@ramsrigouthamg ramsrigouthamg added the enhancement New feature or request label Dec 3, 2023

SkalskiP commented Dec 4, 2023

Hi, @ramsrigouthamg! 👋🏻 Thank you for your interest in our project. You can already run the following functions independently:

@SkalskiP SkalskiP self-assigned this Dec 4, 2023
@ramsrigouthamg

Thanks @SkalskiP
Is there support, or the potential to add support, for https://github.com/UX-Decoder/Segment-Everything-Everywhere-All-At-Once in this library?
Basically, I was trying to get better segmentation masks than the traditional SAM-plus-merging approach, which is error-prone.


SkalskiP commented Dec 4, 2023

I'd love to. The Set-of-Mark Prompting Unleashes Extraordinary Visual Grounding in GPT-4V paper used SEEM as well.

But I want Maestro to be easy to install. I don't want to force people to go through this installation process when installing Maestro. So, if we were to integrate it, we would need a SEEM version that is easily installable.

@ramsrigouthamg

Understood, thanks for the quick response!


SkalskiP commented Dec 4, 2023

Alternatively, we can make it pluggable, so that if someone installs it and goes through that pain, they can use it. Do you have experience with SEEM?
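One way to sketch such a pluggable design is an optional-import check, so the heavy SEEM dependency is only loaded if the user has installed it themselves. The `MarkGenerator` protocol and function names below are hypothetical illustrations, not part of Maestro:

```python
import importlib
import importlib.util
from typing import Optional, Protocol

import numpy as np


class MarkGenerator(Protocol):
    """Minimal interface a pluggable segmenter could implement (hypothetical)."""

    def generate(self, image: np.ndarray) -> np.ndarray:
        """Return a (num_masks, H, W) boolean array of segmentation masks."""
        ...


def load_optional_module(module_name: str) -> Optional[object]:
    """Import a backend module only if it is installed; otherwise return None.

    This keeps the hard-to-install dependency out of the core package:
    users who went through the installation get the extra backend,
    everyone else falls back to the default segmenter.
    """
    if importlib.util.find_spec(module_name) is None:
        return None
    return importlib.import_module(module_name)
```

A caller could then do `seem = load_optional_module("seem")` (the package name is an assumption) and register a SEEM-backed `MarkGenerator` only when the import succeeds.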
