Hmm, this is very interesting. I also found this issue, which references the same thing. It seems to have been closed and resolved a while ago, though, so I'm not sure.

However, I haven't had any issues training the networks, and many others were able to train just fine. Basically, I would have assumed the results would be much worse if batch norm weren't working.

Perhaps the best thing to do is a quick test: train using each setup and see if there is a big difference in results. A few epochs should be enough to get some quick numbers. Even after such a short time, if batch norm was indeed "off" before, we should see a difference.
The following is what I found in main.py: `opt` is created directly with `optimizer.minimize`. However, this doesn't properly handle batch_norm, and it is recommended that `opt` be created with the UPDATE_OPS control dependencies. Please refer to this file and search for UPDATE_OPS:
https://github.com/tensorflow/tensorflow/blob/master/tensorflow/contrib/slim/python/slim/learning.py
Basically, batch_norm registers its moving-mean/variance update ops in the UPDATE_OPS collection, and optimizer.minimize does not run them on its own.
Please correct me if I'm wrong.
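For reference, here is a minimal sketch of the pattern that the slim learning.py link describes, in TF 1.x graph mode. The placeholder graph, `loss`, and `optimizer` below are hypothetical stand-ins, not the repo's actual main.py code:

```python
import tensorflow as tf  # TF 1.x graph mode

# Hypothetical stand-in graph for illustration; the repo's actual
# model construction in main.py would replace these few lines.
x = tf.placeholder(tf.float32, [None, 4])
is_training = tf.placeholder(tf.bool, [])
h = tf.layers.batch_normalization(x, training=is_training)
loss = tf.reduce_mean(tf.square(h))
optimizer = tf.train.AdamOptimizer(1e-3)

# batch_norm puts its moving-mean/variance update ops in the
# UPDATE_OPS collection; optimizer.minimize() alone never runs them,
# so the train op must explicitly depend on them.
update_ops = tf.get_collection(tf.GraphKeys.UPDATE_OPS)
with tf.control_dependencies(update_ops):
    opt = optimizer.minimize(loss)
```

Without the `control_dependencies` wrapper, training still "works" on batch statistics, but the moving averages never update, so at inference time (`training=False`) batch norm normalizes with its stale initial statistics.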