
Batch size and training epochs do not match the paper #103

Open
Sumutan opened this issue May 29, 2023 · 0 comments
Comments

Sumutan commented May 29, 2023

Thank you for your open-source project!
The fine-tuning script that corresponds to the 1600-epoch pre-training in your provided scripts differs from the configuration given in the appendix of the paper:
1. The total batch size is 512 (8 batch size * 8 nodes * 8 GPUs) in the paper, but 256 (2 batch size * 2 num_sample * 8 nodes * 8 GPUs) in the script; see the arithmetic sketch below.
2. The number of training epochs is reduced from 75 in the paper to 35 in the script.
Would it be possible to achieve similar training results despite these differences?
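For reference, a minimal sketch of the effective-batch-size arithmetic being compared. The function name is made up for illustration, and treating num_sample as repeated augmentation (each GPU contributing batch_size * num_sample clips per optimizer step) is an assumption, not something taken from the repository:

```python
# Sketch only: effective batch size under the assumption that num_sample
# acts as repeated augmentation on each GPU.
def effective_batch_size(batch_size: int, num_sample: int, gpus_per_node: int, nodes: int) -> int:
    return batch_size * num_sample * gpus_per_node * nodes

# Paper appendix: 8 per-GPU batch * 8 GPUs per node * 8 nodes
paper = effective_batch_size(batch_size=8, num_sample=1, gpus_per_node=8, nodes=8)
# Provided script: 2 per-GPU batch * 2 num_sample * 8 GPUs per node * 8 nodes
script = effective_batch_size(batch_size=2, num_sample=2, gpus_per_node=8, nodes=8)
print(paper, script)  # 512 256
```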
(two screenshots attached)
