
[BUG] BertLMHeadModel.from_pretrained hangs when using zero-3 / zero3-offload #5520

Open
XenonLamb opened this issue May 10, 2024 · 1 comment
Labels: bug, training

XenonLamb commented May 10, 2024

Describe the bug
I tried to run the LLaMA-VID model (https://github.com/dvlab-research/LLaMA-VID/tree/main) under ZeRO-3, and during model initialization, when the model's text encoder is created, the call to BertLMHeadModel.from_pretrained("bert-base-uncased") causes the training script to hang until it hits an NCCL timeout.

To Reproduce
I'm using torch==2.1.0, deepspeed==0.9.5, accelerate==0.30.0, and transformers==4.39.2, with flash-attn installed.
The timeout occurs when calling BertLMHeadModel.from_pretrained("bert-base-uncased") (https://github.com/dvlab-research/LLaMA-VID/blob/main/llamavid/model/llamavid_arch.py#L214).
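
For context, my understanding is that with ZeRO-3 active, from_pretrained constructs the model under deepspeed.zero.Init and issues collectives while partitioning the weights, so every rank has to reach the call together. A stripped-down sketch of the situation (the script name, ds_config values, and layout here are my own guesses, not copied from the LLaMA-VID code) would be:

```python
# repro_bert_zero3.py -- hypothetical minimal reproduction sketch; names and
# config values below are assumptions, not taken from the LLaMA-VID repo.
# Launched on one node, e.g.: deepspeed --num_gpus=8 repro_bert_zero3.py
import torch
import deepspeed
from transformers import BertLMHeadModel
from transformers.integrations import HfDeepSpeedConfig

ds_config = {
    "train_micro_batch_size_per_gpu": 1,
    "zero_optimization": {"stage": 3},
    "bf16": {"enabled": True},
}

# Keeping this object alive marks ZeRO-3 as active for transformers, so
# from_pretrained() builds the model under deepspeed.zero.Init and issues
# collectives while partitioning parameters across ranks.
dschf = HfDeepSpeedConfig(ds_config)

deepspeed.init_distributed()

# In my training run, this is where the hang / NCCL timeout shows up.
model = BertLMHeadModel.from_pretrained("bert-base-uncased")
print(f"rank {torch.distributed.get_rank()}: BERT loaded", flush=True)
```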

Expected behavior
The BertLMHeadModel should initialize normally from https://huggingface.co/google-bert/bert-base-uncased

ds_report output

```
[2024-05-10 14:06:47,879] [INFO] [real_accelerator.py:110:get_accelerator] Setting ds_accelerator to cuda (auto detect)

DeepSpeed C++/CUDA extension op report

NOTE: Ops not installed will be just-in-time (JIT) compiled at
runtime if needed. Op compatibility means that your system
meet the required dependencies to JIT install the op.

JIT compiled ops requires ninja
ninja .................. [OKAY]

op name ................ installed .. compatible

[WARNING] async_io requires the dev libaio .so object and headers but these were not found.
[WARNING] async_io: please install the libaio-dev package with apt
[WARNING] If libaio is already installed (perhaps from source), try setting the CFLAGS and LDFLAGS environment variables to where it can be found.
async_io ............... [NO] ....... [NO]
cpu_adagrad ............ [NO] ....... [OKAY]
cpu_adam ............... [NO] ....... [OKAY]
fused_adam ............. [NO] ....... [OKAY]
fused_lamb ............. [NO] ....... [OKAY]
quantizer .............. [NO] ....... [OKAY]
random_ltd ............. [NO] ....... [OKAY]
[WARNING] sparse_attn requires a torch version >= 1.5 and < 2.0 but detected 2.1
[WARNING] using untested triton version (2.1.0), only 1.0.0 is known to be compatible
sparse_attn ............ [NO] ....... [NO]
spatial_inference ...... [NO] ....... [OKAY]
transformer ............ [NO] ....... [OKAY]
stochastic_transformer . [NO] ....... [OKAY]
transformer_inference .. [NO] ....... [OKAY]

DeepSpeed general environment info:
torch install path ............... ['/home/tiger/.local/lib/python3.9/site-packages/torch']
torch version .................... 2.1.0+cu121
deepspeed install path ........... ['/home/tiger/.local/lib/python3.9/site-packages/deepspeed']
deepspeed info ................... 0.9.5, unknown, unknown
torch cuda version ............... 12.1
torch hip version ................ None
nvcc version ..................... 12.1
deepspeed wheel compiled w. ...... torch 2.1, cuda 12.1
```

Screenshots
```
[E ProcessGroupNCCL.cpp:474] [Rank 0] Watchdog caught collective operation timeout: WorkNCCL(SeqNum=2380, OpType=_ALLGATHER_BASE, NumelIn=73728, NumelOut=589824, Timeout(ms)=1800000) ran for 1800360 milliseconds before timing out.
[E ProcessGroupNCCL.cpp:474] [Rank 5] Watchdog caught collective operation timeout: WorkNCCL(SeqNum=2380, OpType=_ALLGATHER_BASE, NumelIn=96, NumelOut=768, Timeout(ms)=1800000) ran for 1800670 milliseconds before timing out.
[E ProcessGroupNCCL.cpp:474] [Rank 2] Watchdog caught collective operation timeout: WorkNCCL(SeqNum=2380, OpType=_ALLGATHER_BASE, NumelIn=73728, NumelOut=589824, Timeout(ms)=1800000) ran for 1800671 milliseconds before timing out.
[E ProcessGroupNCCL.cpp:474] [Rank 1] Watchdog caught collective operation timeout: WorkNCCL(SeqNum=2380, OpType=_ALLGATHER_BASE, NumelIn=384, NumelOut=3072, Timeout(ms)=1800000) ran for 1800683 milliseconds before timing out.
[E ProcessGroupNCCL.cpp:474] [Rank 6] Watchdog caught collective operation timeout: WorkNCCL(SeqNum=2380, OpType=_ALLGATHER_BASE, NumelIn=96, NumelOut=768, Timeout(ms)=1800000) ran for 1800674 milliseconds before timing out.
[E ProcessGroupNCCL.cpp:474] [Rank 3] Watchdog caught collective operation timeout: WorkNCCL(SeqNum=2380, OpType=_ALLGATHER_BASE, NumelIn=384, NumelOut=3072, Timeout(ms)=1800000) ran for 1800686 milliseconds before timing out.
[E ProcessGroupNCCL.cpp:474] [Rank 4] Watchdog caught collective operation timeout: WorkNCCL(SeqNum=2380, OpType=_ALLGATHER_BASE, NumelIn=384, NumelOut=3072, Timeout(ms)=1800000) ran for 1800698 milliseconds before timing out.
[E ProcessGroupNCCL.cpp:474] [Rank 7] Watchdog caught collective operation timeout: WorkNCCL(SeqNum=2381, OpType=_ALLGATHER_BASE, NumelIn=96, NumelOut=768, Timeout(ms)=1800000) ran for 1800309 milliseconds before timing out.
n193-020-206:32019:32316 [0] NCCL INFO [Service thread] Connection closed by localRank 0
n193-020-206:32019:32199 [0] NCCL INFO comm 0x70d69180 rank 0 nranks 8 cudaDev 0 busId 10000 - Abort COMPLETE
[E ProcessGroupNCCL.cpp:488] Some NCCL operations have failed or timed out. Due to the asynchronous nature of CUDA kernels, subsequent GPU operations might run on corrupted/incomplete data.
[E ProcessGroupNCCL.cpp:494] To avoid data inconsistency, we are taking the entire process down.
[E ProcessGroupNCCL.cpp:915] [Rank 0] NCCL watchdog thread terminated with exception: [Rank 0] Watchdog caught collective operation timeout: WorkNCCL(SeqNum=2380, OpType=_ALLGATHER_BASE, NumelIn=73728, NumelOut=589824, Timeout(ms)=1800000) ran for 1800360 milliseconds before timing out.
terminate called after throwing an instance of 'std::runtime_error'
  what():  [Rank 0] NCCL watchdog thread terminated with exception: [Rank 0] Watchdog caught collective operation timeout: WorkNCCL(SeqNum=2380, OpType=_ALLGATHER_BASE, NumelIn=73728, NumelOut=589824, Timeout(ms)=1800000) ran for 1800360 milliseconds before timing out.
n193-020-206:32023:32317 [4] NCCL INFO [Service thread] Connection closed by localRank 0
n193-020-206:32023:32194 [4] NCCL INFO comm 0x700d66a0 rank 4 nranks 8 cudaDev 4 busId 89000 - Abort COMPLETE
[E ProcessGroupNCCL.cpp:488] Some NCCL operations have failed or timed out. Due to the asynchronous nature of CUDA kernels, subsequent GPU operations might run on corrupted/incomplete data.
[E ProcessGroupNCCL.cpp:494] To avoid data inconsistency, we are taking the entire process down.
[E ProcessGroupNCCL.cpp:915] [Rank 4] NCCL watchdog thread terminated with exception: [Rank 4] Watchdog caught collective operation timeout: WorkNCCL(SeqNum=2380, OpType=_ALLGATHER_BASE, NumelIn=384, NumelOut=3072, Timeout(ms)=1800000) ran for 1800698 milliseconds before timing out.
terminate called after throwing an instance of 'std::runtime_error'
  what():  [Rank 4] NCCL watchdog thread terminated with exception: [Rank 4] Watchdog caught collective operation timeout: WorkNCCL(SeqNum=2380, OpType=_ALLGATHER_BASE, NumelIn=384, NumelOut=3072, Timeout(ms)=1800000) ran for 1800698 milliseconds before timing out.
```

System info (please complete the following information):

  • OS: Debian 11
  • GPU count and types: 1 node with 8 A100 (80GB) GPUs

Launcher context
Are you launching your experiment with the deepspeed launcher, MPI, or something else?
I'm launching with the deepspeed launcher, with the environment variables NCCL_P2P_DISABLE=1 and WANDB_MODE=disabled.

XenonLamb (Author) commented:

P.S. I think the way my case hangs is similar to huggingface/transformers#28803. However, even after upgrading accelerate to 0.30.0, the issue is still not resolved.
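
In case it helps with triage: one thing I can try on my side (just a guess at ruling out rank divergence while the checkpoint is being fetched, not a confirmed fix; load_bert_for_zero3 is my own helper name) is to pre-download the checkpoint on rank 0 and put a barrier in front of the call, so all eight ranks enter from_pretrained together and the ZeRO-3 partitioning collectives stay in lockstep:

```python
# Hypothetical workaround sketch, not a confirmed fix. Assumes
# torch.distributed is already initialized by the deepspeed launcher.
import torch.distributed as dist
from huggingface_hub import snapshot_download
from transformers import BertLMHeadModel

def load_bert_for_zero3(name="bert-base-uncased"):
    if dist.get_rank() == 0:
        snapshot_download(name)  # only rank 0 downloads / warms the local cache
    dist.barrier()               # other ranks wait, then read from the cache
    # All ranks now call from_pretrained at the same point, so the ZeRO-3
    # parameter-partitioning collectives should be issued in matching order.
    return BertLMHeadModel.from_pretrained(name)
```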
