Hello, I've recently been studying Megatron as well and came across Megatron-DeepSpeed. That project doesn't implement a LLaMA model itself, but it does provide a shell script for pretraining a LLaMA-architecture model. Could you explain how your project differs from it?

Also, why does the model need to be converted to the Megatron format? I used the HuggingFace .bin model directly and it seemed to run fine.

Thanks for the code 😊😊😊
https://github.com/alibaba/Megatron-LLaMA/blob/main/README_zh.md#2-megatron-llama%E4%B8%ADoverlappeddistributedoptimizer%E7%AE%80%E4%BB%8B This section explains how our communication scheme differs from DeepSpeed's; the current approach achieves higher communication efficiency.
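The linked README covers the details of the OverlappedDistributedOptimizer. As a rough illustration of the underlying idea only, here is a minimal sketch (not Megatron-LLaMA's actual implementation) of overlapping gradient communication with the backward pass: an asynchronous all-reduce is launched for each parameter's gradient as soon as it is ready, instead of waiting for backward() to finish before communicating. It assumes a torch.distributed process group is already initialized and PyTorch >= 2.1 (for register_post_accumulate_grad_hook); the function names are made up for the example.

```python
# Minimal sketch of communication/computation overlap; NOT the project's
# OverlappedDistributedOptimizer. Assumes dist.init_process_group() was
# already called and PyTorch >= 2.1.
import torch
import torch.distributed as dist


def attach_overlap_hooks(model: torch.nn.Module):
    """Start an async all-reduce for each gradient as soon as it is ready,
    so communication runs concurrently with the rest of backward()."""
    handles = []

    def hook(param: torch.nn.Parameter):
        # Pre-divide so the SUM across ranks yields the averaged gradient.
        param.grad.div_(dist.get_world_size())
        work = dist.all_reduce(param.grad, op=dist.ReduceOp.SUM, async_op=True)
        handles.append(work)

    for p in model.parameters():
        if p.requires_grad:
            p.register_post_accumulate_grad_hook(hook)
    return handles


def wait_for_grads(handles):
    """Call after loss.backward() and before optimizer.step() so every
    in-flight gradient transfer has completed."""
    for work in handles:
        work.wait()
    handles.clear()
```

In a training loop you would attach the hooks once after building the model, then call wait_for_grads(handles) right before optimizer.step() each iteration; the actual optimizer described in the README is considerably more involved (bucketing, partitioned optimizer states, etc.).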