
fine-tune sgpt-bloom-7b1-msmarco oom #32

Open
wing7171 opened this issue May 23, 2023 · 1 comment


wing7171 commented May 23, 2023

Hi, I'm hitting an OOM error when fine-tuning sgpt-bloom-7b1-msmarco. Could you please share how you did the contrastive fine-tuning on bloom-7b1? (I think distributed training is needed, but I couldn't get it working.)

Muennighoff (Owner) commented May 23, 2023

The command I used is here: https://huggingface.co/bigscience/sgpt-bloom-7b1-msmarco
I ran it on 8 A100 GPUs with 80GB each, I think.
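For anyone landing here, a minimal sketch of the kind of memory-saving setup this sort of contrastive fine-tune typically needs (small per-device batch, gradient checkpointing, mixed precision). This is an illustration with sentence-transformers under assumed settings, not the exact command from the model card above; for a 7B model you would still need multi-GPU data parallelism on top of this.

```python
# Hypothetical sketch only -- the actual training command is on the model card
# linked above. This illustrates common OOM mitigations when contrastively
# fine-tuning a large bi-encoder with sentence-transformers.
from torch.utils.data import DataLoader
from sentence_transformers import SentenceTransformer, InputExample, losses

model = SentenceTransformer("bigscience/sgpt-bloom-7b1-msmarco")

# Gradient checkpointing trades extra compute for much lower activation memory.
model._first_module().auto_model.gradient_checkpointing_enable()

# Toy pairs; real training would stream MS MARCO (query, positive passage) data.
train_examples = [
    InputExample(texts=["what is bloom", "BLOOM is a multilingual language model."]),
    InputExample(texts=["capital of france", "Paris is the capital of France."]),
]
# Keep the per-device batch small; MNRL gets its negatives from the batch itself.
loader = DataLoader(train_examples, shuffle=True, batch_size=2)

# Contrastive objective with in-batch negatives, a standard bi-encoder setup.
train_loss = losses.MultipleNegativesRankingLoss(model)

model.fit(
    train_objectives=[(loader, train_loss)],
    epochs=1,
    use_amp=True,  # mixed precision roughly halves activation memory
    optimizer_params={"lr": 1e-5},
)
```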
