
Visualize self-attention matrix #178

Open
gugababa opened this issue Aug 29, 2023 · 6 comments
Labels: enhancement (New feature or request), new feature (Proposing to add a new feature)

Comments

@gugababa

1. Feature description

Hi! Would it be possible to add an option to visualize the final (and intermediate) self-attention maps/matrices for the SAITS model? Thank you!

2. Motivation

In my work, I am using the SAITS model to impute a spectrum in order to reduce the acquisition time of an MRI scan. I would like to find the optimal acquisition protocol by examining the attention score of each point, since points with higher attention scores are more likely to be worth including in the acquisition protocol.

3. Your contribution

I can contribute to the code base and try to add this feature myself, although this may take some time as I will have to parse through the repo.

@gugababa gugababa added the enhancement (New feature or request) and new feature (Proposing to add a new feature) labels on Aug 29, 2023
@WenjieDu
Owner

Hi there 👋,

Thank you so much for your attention to PyPOTS! You can follow me on GitHub to receive the latest news about PyPOTS. If you find PyPOTS helpful to your work, please star ⭐️ this repository. Your star is your recognition; it helps more people notice PyPOTS and grows the PyPOTS community. It matters and is definitely a kind of contribution to the community.

I have received your message and will respond ASAP. Thank you for your patience! 😃

Best,
Wenjie

@WenjieDu
Owner

WenjieDu commented Aug 30, 2023

Hey @gugababa, thanks for opening this issue! Your request is similar to the one @vemuribv made in #177. You both want access to the representations learned by the models, not just the final results, which could be useful for analyzing model behavior. That sounds reasonable and necessary, so let's make it!

Could you please make a PR adding a function that helps visualize the SAITS model's attention matrix? I will adjust the framework API so that the model returns its attention matrix for your function to visualize. After your PR gets merged, you will be listed among the PyPOTS contributors: https://pypots.com/about/#all-contributors

What do you think? 😃

@gugababa
Author

gugababa commented Aug 30, 2023 via email

@WenjieDu
Owner

WenjieDu commented Aug 30, 2023 via email

@gugababa
Author

gugababa commented Sep 1, 2023 via email

@WenjieDu
Owner

WenjieDu commented Sep 4, 2023

Hi Anshu,

Self-attention calculates the similarities between time steps, and the attention weight map represents those similarities. For each attention layer, the shape of the attention weights is [batch_size, n_heads, n_steps, n_steps]. So it depends on whether you want to visualize all layers' attention weights or only the last layer's. Regarding the weighted-combination block in SAITS: yes, the weights are averaged across the heads, so the shape becomes [batch_size, n_steps, n_steps], and they are taken from the last layer of the 2nd DMSA block.
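The head-averaging described above can be sketched as follows. This is a minimal illustration, not PyPOTS code: `attn_weights` is random stand-in data with the stated shape [batch_size, n_heads, n_steps, n_steps]; in practice it would come from the SAITS model once the framework API exposes the attention matrix.

```python
import numpy as np

# Stand-in attention weights with shape [batch_size, n_heads, n_steps, n_steps]
batch_size, n_heads, n_steps = 2, 4, 6
rng = np.random.default_rng(0)
attn_weights = rng.random((batch_size, n_heads, n_steps, n_steps))
# Normalize each row so it behaves like a softmax attention distribution
attn_weights /= attn_weights.sum(axis=-1, keepdims=True)

# Average across the heads axis, as in SAITS' weighted-combination block:
# the shape becomes [batch_size, n_steps, n_steps]
head_avg = attn_weights.mean(axis=1)
print(head_avg.shape)  # (2, 6, 6)

# To visualize one sample's map (requires matplotlib):
# import matplotlib.pyplot as plt
# plt.imshow(head_avg[0], cmap="viridis")
# plt.xlabel("key time step"); plt.ylabel("query time step")
# plt.colorbar(); plt.show()
```

Note that averaging row-stochastic matrices across heads keeps each row summing to 1, so the averaged map can still be read as an attention distribution over time steps.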
