Hello there,

I am using this repo and trying to resume training from your pretrained model with my own dataset. I already prepared the dataset as the README tutorial describes. The problem: I ran the train_rnn.py script; it starts successfully but then freezes at:
20-09-29 15:51:21 [train_rnn.py:66 - INFO ] Building the model of Dual-Path-RNN
20-09-29 15:51:21 [train_rnn.py:69 - INFO ] Building the optimizer of Dual-Path-RNN
20-09-29 15:51:21 [train_rnn.py:72 - INFO ] Building the dataloader of Dual-Path-RNN
It has been an hour, and watching htop I can see my RAM usage climbing above 200 GB. Is this normal, or should the script be loading the dataset on the fly?
Addition: I am using your default .yml config, with batch size 1. Would a larger batch size help with this problem?
FYI, each speaker's data is roughly 60 GB, so the total for speaker 1, speaker 2, and the mixtures is about 180 GB.
The number of tasks shown in htop is also increasing; is that normal?
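For reference, the usual way to keep RAM bounded with a dataset this large is to have the Dataset read each utterance from disk lazily in `__getitem__` rather than preloading all audio in `__init__`. The sketch below is only an illustration of that pattern, not this repo's actual data loader; the scp file names, field layout, and class name are assumptions:

```python
# Hedged sketch of on-the-fly loading, so per-step memory is one utterance,
# not the full 180 GB dataset. Paths, scp format, and names are assumed,
# not taken from this repo's data_loader implementation.
import soundfile as sf
import torch
from torch.utils.data import Dataset, DataLoader


class LazySeparationDataset(Dataset):
    """Reads one (mix, s1, s2) triple from disk per __getitem__ call."""

    def __init__(self, mix_scp, s1_scp, s2_scp):
        # Keep only the path lists in memory, never the audio itself.
        self.mix = [line.split()[-1] for line in open(mix_scp)]
        self.s1 = [line.split()[-1] for line in open(s1_scp)]
        self.s2 = [line.split()[-1] for line in open(s2_scp)]

    def __len__(self):
        return len(self.mix)

    def __getitem__(self, idx):
        mix, _ = sf.read(self.mix[idx], dtype="float32")
        s1, _ = sf.read(self.s1[idx], dtype="float32")
        s2, _ = sf.read(self.s2[idx], dtype="float32")
        return (torch.from_numpy(mix),
                [torch.from_numpy(s1), torch.from_numpy(s2)])


# With batch_size=1, only one utterance (plus worker prefetch buffers)
# is resident at a time, so a larger batch size would increase RAM use,
# not reduce it.
loader = DataLoader(LazySeparationDataset("mix.scp", "s1.scp", "s2.scp"),
                    batch_size=1, shuffle=True, num_workers=4)
```

If the repo's loader instead preloads every file while "Building the dataloader", the memory growth you see is expected and will approach the full dataset size.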