All attention papers feature some visualization of the attention weights on some input. Has anyone been able to run a sample through the Seq2Seq Attention Decoder model in translate.py and get the attention activations to do such a visualization?
It should be easy to fetch the attention weights out during a session run call and visualize them. You can try posting this to StackOverflow to see if someone in the general community has done this visualization. I am closing this issue, since we have the required functionality in TensorFlow.
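To illustrate what "fetch it out and visualize" could look like: translate.py does not expose the attention weights by name, so one would have to modify the attention decoder to also return its per-step attention distributions and then fetch that tensor in the run call. The sketch below skips TensorFlow entirely and just shows, with hypothetical raw alignment scores, how the fetched weights are normalized (softmax per decoder step) and rendered as a simple text heatmap; all names and numbers here are illustrative, not part of the actual translate.py API.

```python
import math

def softmax(scores):
    # Numerically stable softmax: each decoder step's raw alignment
    # scores become a probability distribution over source tokens.
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical raw alignment scores: 3 decoder steps x 4 source tokens.
# In practice these would be fetched from the model during a run call.
raw_scores = [
    [2.0, 0.5, 0.1, 0.0],
    [0.1, 2.5, 0.3, 0.2],
    [0.0, 0.4, 1.8, 2.1],
]
attention = [softmax(row) for row in raw_scores]

# Render the attention matrix as a text heatmap, one row per decoder
# step; a plotting library's imshow of this matrix gives the familiar
# alignment plots from the attention papers.
for row in attention:
    print(" ".join(f"{w:.2f}" for w in row))
```

Each row sums to 1, and the position of the largest weight in a row shows which source token that decoder step attended to most.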