Sequence parallel is enabled even when I don't use it. #669
Comments
Hi @amulil !
@HIT-cwh Thanks for your tip. I hadn't installed flash-attn; after installing it, there is no error. But the command I run shouldn't use sequence parallel: its sequence_parallel_world_size is changed to 4 when in fact it should be 1.
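For context, a minimal sketch of the arithmetic involved (the variable names below are illustrative, not xtuner's API). With 4 GPUs and an intended sequence-parallel size of 1, the data-parallel size should be 4; the reported bug instead yields a sequence-parallel size of 4, which collapses data parallelism to 1:

world_size = 4                  # NPROC_PER_NODE: total GPUs in the job
sequence_parallel_size = 1      # intended value: no sequence splitting
data_parallel_size = world_size // sequence_parallel_size
print(data_parallel_size)       # expected 4; with the bug (sp_size=4) this becomes 1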
I ran into the same problem. Do you have a solution for it, bro?
Currently, there is a bug arising from sequence parallel when training without DeepSpeed. This PR will fix the bug and will be merged soon. We apologize for any inconvenience this may have caused. In addition, we recommend using DeepSpeed to optimize the training phase by passing a --deepspeed flag to xtuner train (see the example below).
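For reference, this is roughly what that recommendation looks like for the command in this thread (the --deepspeed flag and the deepspeed_zero2 preset follow xtuner's documented usage; swap in deepspeed_zero1 or deepspeed_zero3 as needed):

# assumes the same 4-GPU setup as the reproduction command below
CUDA_VISIBLE_DEVICES=4,5,6,7 NPROC_PER_NODE=4 xtuner train qwen1_5_0_5b_chat_qlora_alpaca_e3 --deepspeed deepspeed_zero2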
version
05/09 21:16:21 - mmengine - INFO - 0.1.18
how to reproduce
CUDA_VISIBLE_DEVICES=4,5,6,7 NPROC_PER_NODE=4 xtuner train qwen1_5_0_5b_chat_qlora_alpaca_e3
log
I only changed batch_size to 4 in the config file qwen1_5_0_5b_chat_qlora_alpaca_e3, but sequence_parallel_world_size is changed to 4.
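For anyone verifying their own setup: in recent xtuner configs the sequence-parallel degree is an ordinary variable near the top of the config file. A minimal excerpt, assuming the variable names used by the alpaca-style configs (illustrative, not a verbatim copy):

# sketch of qwen1_5_0_5b_chat_qlora_alpaca_e3.py; names assumed, not verbatim
sequence_parallel_size = 1   # intended: no sequence parallelism
batch_size = 4               # per-device batch size (the only edit made in this report)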