Search before asking
I have searched the Autodistill issues and found no similar feature requests.
Description
I couldn't find any information on support for finetuning the foundation model before distilling. Sorry if I missed it!
I think this is an extremely important feature, since it can really help in cases where the foundation model performs very badly unless it gets to see a hundred or so examples of the unseen domain.
It would also allow the user to iterate on improving the foundation model with corrected data and gradually distill better and better small models.
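The iterate-correct-distill loop described above could be sketched as follows. Everything here is hypothetical: none of these helper functions exist in autodistill; they stand in for whatever finetuning, auto-labeling, and correction code would implement the feature.

```python
# Hypothetical sketch of the proposed loop; none of these helpers exist in
# autodistill -- they stand in for finetuning/labeling code to be written.

def finetune(foundation_model, corrected_examples):
    """Stand-in: update the foundation model on user-corrected labels."""
    foundation_model["rounds"] += 1
    return foundation_model

def autolabel(foundation_model, images):
    """Stand-in: run the foundation model to produce a candidate dataset."""
    return [(img, f"label@round{foundation_model['rounds']}") for img in images]

def distill(dataset):
    """Stand-in: train a small target model on the auto-labeled dataset."""
    return {"trained_on": len(dataset)}

def correct(dataset):
    """Stand-in: human review/correction of a subset of the labels."""
    return dataset[:10]  # pretend the user fixed the first ten examples

foundation = {"rounds": 0}
images = [f"img_{i}.png" for i in range(100)]

# Each iteration: auto-label -> correct a subset -> finetune the foundation.
for _ in range(3):
    dataset = autolabel(foundation, images)
    corrections = correct(dataset)
    foundation = finetune(foundation, corrections)

# Once the foundation model is good enough, distill the small model.
small_model = distill(autolabel(foundation, images))
```

The point of the loop is that distillation only happens once the foundation model's labels are worth imitating, which matches the use case below.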
Use case
E.g., I have strange-looking images from point cloud renders that are close to what the foundation model should be able to handle, but the segmentations are bad enough that it's pointless to distill a smaller model until the foundation model gives better results.
Additional
I will try and see if I can do this manually by getting grads through an inference interface.
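If the inference interface doesn't expose autograd at all, one (slow) fallback is to estimate parameter gradients with finite differences through the black-box predict call. A toy illustration with a one-parameter stand-in model; a real segmentation model has far too many parameters for this to be practical, so getting true grads through the interface is the better path:

```python
# Toy illustration: estimating a parameter gradient through a black-box
# "inference interface" with central finite differences. The one-parameter
# model here is a stand-in; real foundation models would make this approach
# prohibitively slow, so autograd access is preferable.

def inference(w, x):
    """Black-box predict call: we can evaluate it but not backprop through it."""
    return w * x

def loss(w, x, target):
    pred = inference(w, x)
    return (pred - target) ** 2

def finite_diff_grad(f, w, eps=1e-5):
    """Central-difference estimate of df/dw."""
    return (f(w + eps) - f(w - eps)) / (2 * eps)

w, x, target, lr = 0.0, 2.0, 6.0, 0.05
for _ in range(200):
    g = finite_diff_grad(lambda wi: loss(wi, x, target), w)
    w -= lr * g

# w converges toward target / x = 3.0
```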
Are you willing to submit a PR?
Yes, I'd like to help by submitting a PR!
+1.
Unseen domains and hierarchical objects are a big challenge for current foundation models, e.g., SAM, DINO, etc.
AFAIK, not only in this repo, finetuning for such tasks is not well-studied for now.
Thank you for filing this Issue! We have not yet thought about fine-tuning foundation models as part of autodistill. I have made a note of this idea and will consider how we can look at fine-tuning models in the future.
We can of course finetune models in our own codebases too, if you think this is outside the intended scope.
I recommend having a look at PEFT if you haven't seen it: https://github.com/huggingface/peft :) It can be used as a utility library for lightweight finetuning.
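For context, one of the techniques PEFT provides is LoRA: the pretrained weight matrix stays frozen and only a low-rank update is trained. The core math (not the PEFT API, which wraps this around transformer layers via `LoraConfig` and `get_peft_model`) can be sketched in a few lines:

```python
# Core idea behind LoRA (one method PEFT implements): keep the frozen weight
# W fixed and learn only a low-rank update B @ A, so the trainable parameter
# count is r*(d_in + d_out) instead of d_in*d_out. Pure-Python sketch only;
# the real PEFT library handles this for you.

def matmul(X, Y):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*Y)] for row in X]

def add(X, Y):
    return [[x + y for x, y in zip(rx, ry)] for rx, ry in zip(X, Y)]

d_out, d_in, r, alpha = 4, 4, 1, 1.0

W = [[1.0 if i == j else 0.0 for j in range(d_in)] for i in range(d_out)]  # frozen
B = [[0.0] for _ in range(d_out)]   # d_out x r, initialized to zero (standard LoRA init)
A = [[0.5, 0.0, 0.0, 0.0]]          # r x d_in

# Effective weight: W + (alpha / r) * B @ A
scale = alpha / r
delta = [[scale * v for v in row] for row in matmul(B, A)]
W_eff = add(W, delta)

# With B zero-initialized, the adapted model starts out identical to the
# frozen one, so finetuning begins from the foundation model's behavior.
```

That zero-init property is why LoRA is attractive for exactly the use case in this issue: finetuning starts from the foundation model's existing behavior and only a tiny fraction of parameters need gradients.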