
Lightweight fine-tuning or few-shot learning for limited labeled data #536

Open
Septimus2024 opened this issue Feb 29, 2024 · 1 comment

@Septimus2024

Feature request

After semi-supervised pretraining, can we do lightweight fine-tuning or few-shot learning instead of full classification training?

What is the expected behavior?
Instead of fine-tuning on a sizeable amount of labeled data, is it possible to do lightweight fine-tuning (e.g., fine-tuning on fewer than 100 labeled samples) or few-shot learning instead of standard classification?
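One common lightweight option is to freeze the pretrained encoder and train only a small linear head on its embeddings. A minimal numpy sketch under that assumption (the `feats` input stands in for embeddings extracted from the pretrained encoder, which is hypothetical here, not an existing API of this library):

```python
import numpy as np

def train_linear_head(feats, labels, n_classes, lr=0.5, epochs=500):
    """Train only a softmax linear head on frozen pretrained features.

    feats  : (n, d) array of embeddings from the frozen pretrained encoder
    labels : (n,) integer class labels (can be very few, e.g. < 100)
    Returns the learned weights W (d, n_classes) and bias b (n_classes,).
    """
    rng = np.random.default_rng(0)
    W = rng.normal(scale=0.01, size=(feats.shape[1], n_classes))
    b = np.zeros(n_classes)
    onehot = np.eye(n_classes)[labels]
    for _ in range(epochs):
        logits = feats @ W + b
        logits -= logits.max(axis=1, keepdims=True)  # numerical stability
        probs = np.exp(logits)
        probs /= probs.sum(axis=1, keepdims=True)
        grad = (probs - onehot) / len(feats)  # softmax cross-entropy gradient
        W -= lr * (feats.T @ grad)
        b -= lr * grad.sum(axis=0)
    return W, b
```

Because only `d * n_classes + n_classes` parameters are trained, this is far less prone to overfitting tiny labeled sets than updating the whole network.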

What is the motivation or use case for adding/changing the behavior?
Only a limited amount of labeled data is available to fine-tune the model.

How should this be implemented in your opinion?
For few-shot learning, maybe change the loss function.
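One concrete loss-function change in this spirit is a metric-based objective, e.g. nearest-class-prototype classification as in prototypical networks. A minimal numpy sketch, assuming embeddings from the pretrained encoder are already available (the function name and inputs are illustrative, not part of this library):

```python
import numpy as np

def prototype_classify(support_x, support_y, query_x):
    """Few-shot classification by nearest class prototype.

    support_x : (n_support, d) embeddings of the few labeled examples
    support_y : (n_support,) integer labels for the support set
    query_x   : (n_query, d) embeddings to classify
    Returns predicted labels for each row of query_x.
    """
    classes = np.unique(support_y)
    # One prototype per class: the mean embedding of its support examples.
    prototypes = np.stack(
        [support_x[support_y == c].mean(axis=0) for c in classes]
    )
    # Squared Euclidean distance from each query to each prototype.
    d2 = ((query_x[:, None, :] - prototypes[None, :, :]) ** 2).sum(axis=-1)
    return classes[d2.argmin(axis=1)]
```

No gradient steps are needed at all in this variant, which makes it attractive when only a handful of labels exist; a trainable version would replace the final classification loss with a softmax over negative prototype distances.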

Are you willing to work on this yourself?
Yes.

@Septimus2024 Septimus2024 added the enhancement New feature or request label Feb 29, 2024
@Optimox
Collaborator

Optimox commented Mar 25, 2024

Feel free to open a PR with a concrete proposition.
