
I hope to use SAM's own prompter to input points and labels for inference #4201

Open

GZ-YourZY opened this issue May 15, 2024 · 0 comments

Labels: enhancement (New feature or request)
GZ-YourZY commented May 15, 2024

Description

I would like to extend the multimodal library and the code in the ConvLoRA example.

When fine-tuning SAM through ConvLoRA, I want to use SAM's own prompt encoder to feed in points and labels at inference time, so I can verify whether the model has actually acquired the target task's capability during fine-tuning, rather than merely having learned a good prompt encoding. This distinction is important. I hope the code can be extended to support prompt-driven segmentation like the original SAM, not just an end-to-end model.
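For reference, a minimal sketch of the kind of point-prompt inference being requested, using the upstream segment_anything API. The checkpoint path, image path, and click coordinates are placeholders, and in the ConvLoRA setting the fine-tuned weights would presumably need to be loaded or merged into this SAM architecture first:

```python
import numpy as np
import cv2
from segment_anything import sam_model_registry, SamPredictor

# Build a SAM backbone; for ConvLoRA, load the fine-tuned weights into this
# model before prediction (the checkpoint filename here is a placeholder).
sam = sam_model_registry["vit_b"](checkpoint="sam_vit_b_01ec64.pth")
predictor = SamPredictor(sam)

# set_image expects an HxWx3 uint8 RGB array.
image = cv2.cvtColor(cv2.imread("example.jpg"), cv2.COLOR_BGR2RGB)
predictor.set_image(image)

# One foreground click (label 1) at pixel (x, y); label 0 marks background.
masks, scores, logits = predictor.predict(
    point_coords=np.array([[500, 375]]),
    point_labels=np.array([1]),
    multimask_output=True,
)
print(masks.shape, scores)  # (3, H, W) candidate masks with confidence scores
```

Exposing an equivalent entry point for the ConvLoRA-fine-tuned model would make it possible to test point prompts directly, rather than evaluating only the end-to-end pipeline.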

@GZ-YourZY added the enhancement (New feature or request) label on May 15, 2024