Issues: idiap/fast-transformers
#129  [WinError 2] The system cannot find the file specified: build_ext
      opened Feb 29, 2024 by cliffordkleinsr

#119  Understanding how to define key, query and value for the cross attention calculation
      opened Dec 18, 2022 by neuronphysics

#109  how causal mask constructed in training batch model with linear causal attention?
      opened Nov 26, 2021 by Howuhh

#98   local_dot_product_cuda fails when queries and keys have different lengths
      opened Aug 8, 2021 by tridao

#70   Linear Transformers are Fast Weight Memory Systems
      label: new-attention (add a new attention implementation)
      opened Mar 10, 2021 by angeloskath

#60   Feature request: L2 self-attention
      label: new-attention (add a new attention implementation)
      opened Jan 19, 2021 by ketyi