Add a note in the docs about the momentum formulation used in optim #1099
Comments
For a fixed learning rate, the two formulations are equivalent. The Torch formulation is chosen because the step size is directly proportional to the learning rate. This means that if you decrease the learning rate, the step size decreases immediately, and not after some number of iterations, which is generally what you want.
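A quick numeric check of the fixed-learning-rate equivalence (an illustrative sketch; `v` denotes the Torch-style buffer and `u` the Sutskever-style one):

```python
# With a constant learning rate, the two buffers track each other exactly:
# u == -lr * v at every iteration, so the parameter updates coincide.
lr, m = 0.1, 0.9
v = u = 0.0
for g in [1.0, -0.5, 2.0, 0.3]:  # an arbitrary gradient sequence
    v = m * v + g                # Torch-style buffer;     step = -lr * v
    u = m * u - lr * g           # Sutskever-style buffer; step = u
    assert abs(u + lr * v) < 1e-12
```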
I agree. My only concern was that, given that the reference for the method is the Sutskever paper and there is no documentation to explain the difference, the current implementation could be a potential "gotcha" for folks moving to PyTorch from other frameworks.
@keskarnitish if you send a PR adding a note to the docs, I am happy to merge.
I have been looking at the implementation of SGD + Momentum in PyTorch and noticed something a bit different from how other packages (and papers) describe it. For the moment, let's focus solely on (classical) momentum and not Nesterov's version.
At the time of writing, the implementation does, in essence, the following.
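A minimal Python sketch of the update (illustrative, not the verbatim `torch.optim.SGD` source; `dampening=0` and `nesterov=False` are assumed):

```python
# Sketch of the PyTorch-style SGD + momentum update (illustrative).
# The learning rate multiplies the entire momentum buffer, not just
# the current gradient.
def pytorch_sgd_step(param, grad, buf, lr, m):
    buf = m * buf + grad      # v = m*v + g
    param = param - lr * buf  # step: ∆x = lr * v
    return param, buf
```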
Mathematically, if we denote the momentum buffer by `v` and assume that `dampening=0`, at every iteration the buffer is updated as `v = m*v + g` and the step is `∆x = lr * v`. Notice that the learning rate `lr` hits the momentum term `v` as well as the gradient. To me, this is different from what classical momentum is, and it also differs from how other packages implement SGD+M.

Let us contrast this with the Sutskever et al. paper and other commonly used packages such as Lasagne, Keras, Neon, etc.
Sutskever et al.
The relevant section of the paper gives the following update.
Retaining the syntax from above, the algorithm updates `v` as `v = m*v - lr * g`, with the step `∆x = v`. So, the learning rate `lr` only hits the gradient. It does not (explicitly) influence the effect of the momentum term, which is in contrast with PyTorch's implementation.
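For contrast, the same sketch written in the Sutskever-style formulation (again illustrative, not any package's actual source):

```python
# Sketch of the Sutskever-style SGD + momentum update (illustrative).
# The learning rate only scales the gradient before it enters the buffer.
def sutskever_sgd_step(param, grad, buf, lr, m):
    buf = m * buf - lr * grad  # v = m*v - lr * g
    param = param + buf        # step: ∆x = v
    return param, buf
```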
Lasagne
Lasagne employs the same rule as suggested in Sutskever et al. for momentum.
Keras
Same for Keras.
Neon
Likewise for Neon.
Is this disparity real, or am I missing something important?
The difference between the two implementations is not insignificant, especially when `lr` is reduced along the way. If my claim is true, maybe we could update the reference (I'm not sure what that would be) or include the above version in the SGD code (I can take this up if necessary)?
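To see the size of the difference when the learning rate is cut, here is a toy numeric sketch with a constant gradient (hypothetical values):

```python
# Toy comparison of the two formulations when lr is cut 10x mid-run.
# The PyTorch-style step shrinks immediately; the Sutskever-style buffer
# keeps carrying the old learning rate for several more iterations.
m, g = 0.9, 1.0                 # momentum coefficient, constant gradient
v_torch = v_sut = 0.0
for t, lr in enumerate([0.1] * 5 + [0.01] * 5):
    v_torch = m * v_torch + g   # PyTorch:   step size = lr * v_torch
    v_sut = m * v_sut - lr * g  # Sutskever: step size = -v_sut
    print(t, lr, lr * v_torch, -v_sut)
```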