Running multi-gpu hangs after first step #18
Comments
@jpfeil ah, i don't know from first glance at the code, and don't have access to multi-gpu at the moment |
@jpfeil did you resolve the other two issues that are open on single gpu? |
@lucidrains I couldn't get multi-GPU to work, so I'm moving forward with single-GPU. I tried running ImageNet, but the adaptive adversarial weight goes to nan, which causes the loss to become nan:

```
LossBreakdown(
    recon_loss=tensor(0.0777, device='cuda:0', grad_fn=<...>),
    lfq_aux_loss=tensor(0.0022, device='cuda:0', grad_fn=<...>),
    quantizer_loss_breakdown=LossBreakdown(
        per_sample_entropy=tensor(0.0003, device='cuda:0', grad_fn=<...>),
        batch_entropy=tensor(0.0003, device='cuda:0', grad_fn=<...>),
        commitment=tensor(0.0024, device='cuda:0', grad_fn=<...>)
    ),
    perceptual_loss=tensor(0.2947, device='cuda:0', grad_fn=<...>),
    adversarial_gen_loss=tensor(0.0186, device='cuda:0', grad_fn=<...>),
    adaptive_adversarial_weight=tensor(nan, device='cuda:0'),
    multiscale_gen_losses=[],
    multiscale_gen_adaptive_weights=[]
)
```

Is there a check we can add here that will allow training to continue? |
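For context (not from this repo's code): in VQGAN-style training the adaptive adversarial weight is typically a ratio of two gradient norms, so it becomes nan or inf when the adversarial gradient norm underflows to zero. A minimal, framework-agnostic sketch of the kind of guard being asked about — the function name, defaults, and fallback value are all illustrative assumptions:

```python
import math

def safe_adaptive_weight(recon_grad_norm, gan_grad_norm,
                         clamp_max=1e4, eps=1e-8, fallback=0.0):
    """Ratio of gradient norms, guarded against division-by-zero and nan.

    Returns `fallback` (effectively skipping the adversarial term for
    this step) whenever the ratio is non-finite, and clamps otherwise.
    """
    weight = recon_grad_norm / (gan_grad_norm + eps)
    if math.isnan(weight) or math.isinf(weight):
        return fallback
    return min(weight, clamp_max)

print(safe_adaptive_weight(0.5, 0.0))           # near-zero denominator -> clamped: 10000.0
print(safe_adaptive_weight(float('nan'), 1.0))  # nan input -> fallback: 0.0
```

Zeroing the weight for one step keeps the reconstruction/perceptual losses flowing while the discriminator recovers, which is gentler than letting nan propagate into every parameter.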
@jpfeil ahh, hard to know without doing the training myself and ironing out the issues. Try 0.1.43, and if that doesn't work, i'll get around to it this weekend |
Same issue, it hangs when training with multi-gpus. |
Caught the same problem here. Multi-GPU training gets stuck at step 1 while single-GPU training works fine.

```python
def train_step(self, dl_iter):
    for grad_accum_step in range(self.grad_accum_every):
        ...
        is_last = grad_accum_step == (self.grad_accum_every - 1)
        context = partial(self.accelerator.no_sync, self.model) if not is_last else nullcontext

        data, *_ = next(dl_iter)
        self.print(f'accum step {grad_accum_step} {data} {data.shape}')

        with self.accelerator.autocast(), context():
            loss, loss_breakdown = self.model(
                data,
                return_loss = True,
                adversarial_loss_weight = adversarial_loss_weight,
                multiscale_adversarial_loss_weight = multiscale_adversarial_loss_weight
            )
        self.print(f'l355 loss {loss.shape} {loss}')

        self.accelerator.backward(loss / self.grad_accum_every)  # stuck here on the last accum step
        self.print('l357 backward')  # never prints before timeout (only on the last accum iter of the second step)
```

I also found a warning emitted at the same point (the last accum backward) of the first step:

```
UserWarning: Grad strides do not match bucket view strides. This may indicate grad was not created according to the gradient layout contract, or that the param's strides changed since DDP was constructed. This is not an error but may impair performance.
grad.sizes() = [32, 64, 1, 1], strides() = [64, 1, 64, 64]
bucket_view.sizes() = [32, 64, 1, 1], strides() = [64, 1, 1, 1]
```

I'm not sure if they are related problems. |
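For what it's worth, the strides warning on its own can usually be silenced by forcing each gradient back into the contiguous layout DDP's bucket expects, via a tensor hook. A self-contained sketch of that idea (illustrative, not code from this repo, and likely unrelated to the hang itself):

```python
import torch
from torch import nn

# Sketch: make every parameter gradient contiguous as it is produced,
# so its strides match DDP's flat bucket view and avoid the
# "Grad strides do not match bucket view strides" warning.
conv = nn.Conv2d(64, 32, kernel_size=1)

for p in conv.parameters():
    # the hook receives the gradient and may return a replacement
    p.register_hook(lambda grad: grad.contiguous())

x = torch.randn(2, 64, 8, 8)
conv(x).sum().backward()
assert conv.weight.grad.is_contiguous()  # layout now matches the bucket view
```

The mismatch typically comes from channels-last activations producing channels-last gradients for 1x1 convolutions; making the grads contiguous trades a small copy for DDP-friendly layout.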
I've done some debugging. At first I suspected several causes for the hang, e.g. my Linux kernel being too old to support the latest versions of torch and accelerate, or unsupported mixed precision. It turns out my problem is the one described in https://discuss.pytorch.org/t/torch-distributed-barrier-hangs-in-ddp/114522/7: running validation only in the main process causes the hang, because the DDP forward waits on collectives the other ranks never join.

```python
def valid_step(...):
    # self.model(...)
    # change the line above to use the local model instead of the DDP wrapper
    self.model.module(...)
```

This solved my multi-GPU training hang. |
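To illustrate why calling the inner module sidesteps the issue: DDP's `forward` launches collective communication that every rank must enter, while `.module` is just the bare local model. A toy, single-process sketch with a stand-in wrapper (class and variable names are illustrative, not repo code):

```python
import torch
from torch import nn

# Toy stand-in for DistributedDataParallel: it stores the real model as
# .module, the same attribute name DDP uses. Illustrative only.
class DDPLikeWrapper(nn.Module):
    def __init__(self, module):
        super().__init__()
        self.module = module

    def forward(self, x):
        # real DDP wraps this call with collective communication,
        # which is exactly what hangs when only rank 0 runs validation
        return self.module(x)

net = nn.Linear(4, 2)
wrapped = DDPLikeWrapper(net)

x = torch.randn(3, 4)
with torch.no_grad():
    out = wrapped.module(x)  # call the local model, skipping the wrapper

assert torch.allclose(out, net(x))
```

With accelerate, `accelerator.unwrap_model(model)` is the supported way to get at the same underlying module.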
@ziyannchen hey, thanks for the debugging. Do you want to see if 0.4.3 works without your modification? |
I'm using accelerate's multi-GPU support to run on a cluster of A100 GPUs.
I can train on a single GPU, but multi-gpu hangs for me. Is there a recommended configuration for running multi-GPU training?