
Rethinking and Improving Relative Position Encoding for Vision Transformer with memory-optimized attention #142

Open
jakubMitura14 opened this issue Dec 29, 2022 · 1 comment
Comments

@jakubMitura14

Hello, I was wondering whether your relative positional encoding schemes would work with approximate or memory-optimized attention mechanisms, for example FlashAttention (https://arxiv.org/abs/2205.14135).
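
To make the concern concrete, here is a rough PyTorch sketch (my own shapes and function names, not your repository code): as I understand it, the query/key variants of iRPE add a relative-position term to the attention logits before the softmax, while FlashAttention never materializes the full L×L logit matrix, so the bias would either have to be computed inside the fused kernel or passed in as an explicit additive mask, which keeps an O(L²) bias tensor around and may force a fallback to a non-Flash backend.

```python
# Minimal sketch with assumed shapes and names; not the iRPE repository code.
import torch
import torch.nn.functional as F

def attention_with_rpe_bias(q, k, v, bias):
    """Reference attention: softmax(QK^T / sqrt(d) + B) V, materializing the
    full L x L logit matrix, which is exactly what FlashAttention avoids."""
    d = q.size(-1)
    logits = q @ k.transpose(-2, -1) / d ** 0.5 + bias  # (B, H, L, L)
    return logits.softmax(dim=-1) @ v

def fused_attention_with_rpe_bias(q, k, v, bias):
    """Same result via PyTorch's fused scaled_dot_product_attention, with the
    relative-position bias passed as an additive attn_mask. The bias tensor is
    still O(L^2), and backends that cannot handle an arbitrary mask may fall
    back to a memory-efficient or math kernel instead of FlashAttention."""
    return F.scaled_dot_product_attention(q, k, v, attn_mask=bias)

if __name__ == "__main__":
    B, H, L, d = 2, 4, 196, 64                 # e.g. 14x14 ViT patch tokens
    q, k, v = (torch.randn(B, H, L, d) for _ in range(3))
    bias = torch.randn(H, L, L)                # stand-in for an iRPE bias lookup
    ref = attention_with_rpe_bias(q, k, v, bias)
    fused = fused_attention_with_rpe_bias(q, k, v, bias)
    print(torch.allclose(ref, fused, atol=1e-5))
```

The numerical result is the same either way; the question is whether the memory benefit survives, which seems to require generating the relative-position bias inside the fused kernel rather than materializing it up front.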

@wkcn added the iRPE label Dec 29, 2022
@wkcn
Contributor

wkcn commented Dec 29, 2022

Thanks for your interest in our work!

Let me read the paper and check whether RPE works with approximate attention mechanisms.
