Hi, I just tried to train the model on a CPU, but I ran into some problems.
While training, I always get an output message saying that the loss at iteration x is 0, which seems odd:
NLL Loss @ epoch 0001 iteration 00000001 = 0.0000
NLL Loss @ epoch 0063 iteration 00000250 = 0.0000
After going through the code of gran_runner, I realized that the part of the code where the loss is calculated is never reached when no GPU is available, since batch_fwd is empty in that case:
GRAN/runner/gran_runner.py
Lines 230 to 259 in 43cb443
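To illustrate, here is a minimal, self-contained toy reproduction of the pattern I think is responsible. It is my own sketch, not code from the repo; `use_gpu` and `batch_fwd` only mirror my reading of the snippet linked above:

```python
import torch
import torch.nn as nn

# Toy reproduction of the suspected pattern (my own sketch, not GRAN code).
use_gpu = torch.cuda.is_available()

model = nn.Linear(4, 1)
avg_train_loss = 0.0
batch_fwd = []

if use_gpu:
    # Batches are only assembled when a GPU is present, so on a CPU-only
    # machine this branch never runs and batch_fwd stays empty.
    device = torch.device("cuda:0")
    model = model.to(device)
    batch_fwd.append(torch.randn(8, 4, device=device))

optimizer = torch.optim.SGD(model.parameters(), lr=0.1)

if batch_fwd:
    # Without a GPU this whole block is skipped, so the loss is never
    # computed and the logged value keeps its initial 0.0.
    optimizer.zero_grad()
    train_loss = torch.stack([model(d).pow(2).mean() for d in batch_fwd]).mean()
    train_loss.backward()
    optimizer.step()
    avg_train_loss += train_loss.item()

print(f"NLL Loss @ epoch 0001 iteration 00000001 = {avg_train_loss:.4f}")
```

On a CPU-only machine this prints a loss of 0.0000, matching the log lines above, because the `if batch_fwd:` block never executes.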
Is this a bug, or did I miss something?