
Multi-GPU training and expected epochs #9

Open
bieltura opened this issue Jan 17, 2022 · 5 comments

Comments

@bieltura

Hi,

First of all, thanks for the nice paper and release code. I am testing your model for a different dataset and two questions come up:

  1. What is the estimated number of epochs needed to train the model? We have experienced some degradation when the model is overtrained (overfitting?) on the data.
  2. Is there a way to train the model in a multi-GPU setup? We have more GPUs available, but the code seems to run only on the first GPU given by the CUDA_VISIBLE_DEVICES argument.

Thanks!

@ivanvovk
Contributor

@bieltura Hi! Thank you for your interest in Grad-TTS work.

  1. The Grad-TTS model from the paper was trained for 1.7 million iterations, which corresponds to approximately 2300 epochs. Usually we trained our models for up to 2000 epochs with a mini-batch size of 16 and 2-second speech fragments (the out_size argument in params.py).
  2. Sorry, our code is not adapted for multi-GPU training, but you can easily change train.py or train_multi_speaker.py following standard PyTorch multi-GPU training practices.
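As a side calculation (not stated in the thread, so treat the numbers as assumptions): with LJSpeech-style audio at a 22050 Hz sample rate and a 256-sample hop length, a 2-second fragment is about 2 * 22050 / 256 ≈ 172 mel frames. The repo's fix_len_compatibility helper (sketched here from memory; verify against utils.py/params.py) rounds that up so it divides evenly by the U-Net's downsampling factor:

```python
def fix_len_compatibility(length, num_downsamplings_in_unet=2):
    # Round `length` up to the nearest multiple of 2**num_downsamplings_in_unet
    # so the mel fragment survives the U-Net's down/up-sampling without
    # shape mismatches. (Sketch based on the Grad-TTS repo; names assumed.)
    factor = 2 ** num_downsamplings_in_unet
    while length % factor != 0:
        length += 1
    return length

sample_rate = 22050   # Hz, LJSpeech default (assumed)
hop_length = 256      # samples per mel frame (assumed)
out_size = fix_len_compatibility(2 * sample_rate // hop_length)
print(out_size)  # 172 frames, i.e. roughly 2 seconds of audio
```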

@bieltura
Author

Hi @ivanvovk,

Thanks for answering the questions. Here's an update that may be helpful for future development:

DataParallel cannot be used with the current setup, because the compute_loss method is not part of the model's forward pass. The solution is to adapt forward to compute the loss function, and to move inference into a separate method (run on a single GPU).
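A minimal sketch of that adaptation, assuming a model exposing a compute_loss method like the one discussed here (the wrapper and the dummy model are illustrative, not the repo's actual code):

```python
import torch
import torch.nn as nn

class LossWrapper(nn.Module):
    """Route forward() through the loss computation so nn.DataParallel
    can scatter the batch across replicas and gather the losses."""
    def __init__(self, model):
        super().__init__()
        self.model = model

    def forward(self, *args, **kwargs):
        # DataParallel splits batched args along dim 0, runs this on each
        # replica, and gathers the returned losses on the default device.
        return self.model.compute_loss(*args, **kwargs)

# Minimal stand-in for a model exposing compute_loss (e.g. GradTTS):
class DummyModel(nn.Module):
    def __init__(self):
        super().__init__()
        self.proj = nn.Linear(8, 1)

    def compute_loss(self, x, target):
        return nn.functional.mse_loss(self.proj(x).squeeze(-1), target)

wrapped = nn.DataParallel(LossWrapper(DummyModel()))  # uses all visible GPUs
loss = wrapped(torch.randn(4, 8), torch.randn(4))
# With several GPUs, one loss per replica comes back, hence the mean().
loss.mean().backward()
```

Inference would then live in a separate method (e.g. a synthesize method called on wrapped.module.model directly), outside the DataParallel scatter/gather path.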

Apart from that, I have found that with multiple GPUs the code breaks when, within a batch, an audio sample is shorter than the 2-second speech fragment. The solution is to force the cut mask to always have this 2-second shape (in frames), changing

```python
y_cut_mask = sequence_mask(y_cut_lengths).unsqueeze(1).to(y_mask)
```

to

```python
y_cut_mask = sequence_mask(torch.LongTensor([out_size] * len(y_cut_lengths))).unsqueeze(1).to(y_mask)
```
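To see why the fixed-length version helps, here is a sketch of a typical sequence_mask helper (the implementation below is assumed, not copied from the repo): with variable lengths, the mask width depends on the longest item in each replica's sub-batch, so two GPUs can produce tensors of different widths; pinning every length to out_size makes the shapes uniform.

```python
import torch

def sequence_mask(lengths, max_length=None):
    # Boolean mask: position j of row i is True while j < lengths[i].
    # (Common pattern for this helper; the repo's version may differ.)
    if max_length is None:
        max_length = int(lengths.max())
    return torch.arange(max_length, device=lengths.device)[None, :] < lengths[:, None]

out_size = 172  # ~2 s of mel frames at 22050 Hz / hop 256 (assumed)

# Variable-length masks: width follows the longest item in *this* sub-batch,
# so replicas on different GPUs can disagree on the tensor width.
var_mask = sequence_mask(torch.LongTensor([100, 150]))    # shape (2, 150)

# Fixed-length masks: every replica produces the same (batch, out_size) shape.
fixed_mask = sequence_mask(torch.LongTensor([out_size] * 2))  # shape (2, 172)
```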

I still find that 2300 epochs on a single GPU is a very large amount of training. Did you follow any procedure to check when the model converged to the best checkpoint?

Thanks

@ivanvovk
Contributor

@bieltura it is usually preferable to use DistributedDataParallel instead of DataParallel. It is faster, and if I am not mistaken, the forward-pass issue does not arise in the DDP setting.

As for checking the convergence of the model, we just checked the quality at 10 iterations and stopped training once it became good. Nothing special.

@bieltura
Author

bieltura commented Feb 8, 2022

Thanks! As a side note, we have been using the energy metric (predicted-target difference) to check whether samples are "good enough" for evaluation. As you mention in your paper, the diffusion loss is not informative about model convergence, since each update targets a randomly picked step between 0 and T. Here are some plots that may be useful to you as well. Feel free to close the issue once you have read it :) And again, thanks for everything.

[two training-metric plots attached]

@iooops

iooops commented Apr 21, 2023

In my case, I found Accelerate very useful: https://github.com/huggingface/accelerate. It takes just a few lines of code.
