
[Question] Finetune Stage 2 Model #15

Open
Xuefei98 opened this issue Apr 18, 2024 · 3 comments
Xuefei98 commented Apr 18, 2024

Question

First of all, great work, and thank you so much for open-sourcing it! Has the stage 2 model (referred to as ViP-LLaVA-Base) been released anywhere? Perhaps as mucai/vip-llava-13b-pretrain? I am trying to fine-tune the stage 2 model on custom GPT instruction data. I am looking at scripts/finetune_stage3.sh and wonder whether that is the correct script, but the model used in that script is ./checkpoints/vip-llava-$model_size-stage2-ft and I don't see it released anywhere. Thank you!

mu-cai (Collaborator) commented Apr 19, 2024

Hi Xuefei,

Thanks for bringing up this point! I just uploaded the 7B stage 2 model:
https://huggingface.co/mucai/vip-llava-7b-base

Mu

Xuefei98 (Author) commented

Hi Mu,

Thank you so much for getting back to me! Would it be possible for you to also share the 13B model? I would like to fine-tune both the 7B and 13B models and compare their performance in my experiments.

Xuefei

Xuefei98 reopened this Apr 19, 2024
mu-cai (Collaborator) commented Apr 22, 2024

You can now find the 13B base model here: https://huggingface.co/mucai/vip-llava-13b-base
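
For anyone following this thread, here is a minimal sketch of how the released base checkpoints might be plugged into the stage 3 fine-tuning flow. It assumes `huggingface-cli` (from a recent huggingface_hub) is available and that scripts/finetune_stage3.sh reads its base model from the ./checkpoints path mentioned above; the exact variable names and flags inside the script may differ, so treat the paths below as placeholders rather than the repo's official procedure.

```bash
# Sketch only: fetch the released stage 2 (base) checkpoint and point the
# stage 3 fine-tuning script at it instead of the unreleased local path
# ./checkpoints/vip-llava-$model_size-stage2-ft.

model_size=7b   # or 13b

# Download the base model from Hugging Face into ./checkpoints
huggingface-cli download mucai/vip-llava-${model_size}-base \
  --local-dir ./checkpoints/vip-llava-${model_size}-base

# Then edit scripts/finetune_stage3.sh so its base-model path points at the
# downloaded checkpoint, e.g. replace
#   ./checkpoints/vip-llava-$model_size-stage2-ft
# with
#   ./checkpoints/vip-llava-$model_size-base
# and launch it with your custom GPT instruction data:
bash scripts/finetune_stage3.sh
```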
