Thanks for the new code update.

I have a question about the modulator and cross-modulator in the decoder.

In the latest code (2022-07-04), the modulator layer is an `Embedding` layer. In the `forward` function, the weight of the modulator is simply added to `x_windows`.

So for the modulator, the code just adds the `x_windows` tensor to the weight of the `Embedding` layer. I wonder whether the layer can actually be learned in this case. Is this your intended implementation?
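For reference, here is a minimal sketch of the pattern being asked about (the class name and shapes are hypothetical, not taken from the repository): an `nn.Embedding` whose weight is added to the window tokens. Because the weight participates directly in the addition, gradients do flow back into it during backpropagation, so such a modulator is learnable.

```python
import torch
import torch.nn as nn

class WindowModulator(nn.Module):
    """Hypothetical sketch: a learnable per-position modulator for window tokens."""

    def __init__(self, win_size: int, dim: int):
        super().__init__()
        # One learnable vector per position inside a window
        self.modulator = nn.Embedding(win_size * win_size, dim)

    def forward(self, x_windows: torch.Tensor) -> torch.Tensor:
        # x_windows: (num_windows * B, win_size * win_size, dim)
        # The Embedding weight broadcasts over the first (batch) dimension.
        return x_windows + self.modulator.weight

mod = WindowModulator(win_size=8, dim=32)
x = torch.randn(4, 64, 32, requires_grad=True)
out = mod(x)
out.sum().backward()

# The embedding weight receives a gradient, so it can be learned.
print(mod.modulator.weight.grad is not None)
```

Since the weight is only ever read (never looked up by index), using `nn.Embedding` here is essentially equivalent to an `nn.Parameter` of the same shape; the addition itself is enough for the optimizer to update it.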