This repository has been archived by the owner on Nov 17, 2023. It is now read-only.

Can We implement Flash attention 2 in MXnet #21222

Open
rajveer43 opened this issue Oct 5, 2023 · 1 comment

Comments

@rajveer43

Description

Flash Attention 2 is a library that provides fused attention kernels for faster and more memory-efficient inference and training.
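
For context, the core idea Flash Attention 2 implements is tiled attention with an online softmax, so the full `seq_len × seq_len` score matrix is never materialized. Below is a minimal NumPy sketch of that algorithm (single head, no masking or dropout); the function name, `block_size` parameter, and overall structure are illustrative only, not part of any MXNet or flash-attn API:

```python
import numpy as np

def flash_attention_reference(q, k, v, block_size=64):
    """NumPy sketch of the tiled, online-softmax attention that
    Flash Attention 2 fuses into a single CUDA kernel.

    q, k, v: (seq_len, head_dim) arrays for a single head."""
    seq_len, head_dim = q.shape
    scale = 1.0 / np.sqrt(head_dim)
    out = np.zeros_like(q)
    for i in range(0, seq_len, block_size):
        qi = q[i:i + block_size]                    # one tile of queries
        m = np.full(len(qi), -np.inf)               # running row-wise max
        l = np.zeros(len(qi))                       # running softmax denominator
        acc = np.zeros_like(qi)                     # running weighted sum of values
        for j in range(0, seq_len, block_size):     # stream over key/value tiles
            s = qi @ k[j:j + block_size].T * scale  # scores for this tile only
            m_new = np.maximum(m, s.max(axis=1))
            alpha = np.exp(m - m_new)               # rescale earlier partial sums
            p = np.exp(s - m_new[:, None])
            l = l * alpha + p.sum(axis=1)
            acc = acc * alpha[:, None] + p @ v[j:j + block_size]
            m = m_new
        out[i:i + block_size] = acc / l[:, None]    # finish the softmax per row
    return out

# Agrees with naive attention: softmax(q @ k.T / sqrt(d)) @ v
rng = np.random.default_rng(0)
q, k, v = (rng.standard_normal((128, 64)) for _ in range(3))
s = q @ k.T / np.sqrt(64)
ref = np.exp(s - s.max(axis=1, keepdims=True))
ref = ref / ref.sum(axis=1, keepdims=True) @ v
assert np.allclose(flash_attention_reference(q, k, v), ref)
```

The flash-attn package itself ships these as fused CUDA kernels (e.g. `flash_attn_func`) operating on half-precision tensors; an MXNet integration would presumably expose a similar operator that wraps those kernels rather than reimplementing the tiling at the framework level.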

@rajveer43 changed the title from "Can We implement Flash attention 2 in MXNET" to "Can We implement Flash attention 2 in MXnet" on Oct 5, 2023
@github-actions

github-actions bot commented Oct 5, 2023

Welcome to Apache MXNet (incubating)! We are on a mission to democratize AI, and we are glad that you are contributing to it by opening this issue.
Please make sure to include all the relevant context, and one of the @apache/mxnet-committers will be here shortly.
If you are interested in contributing to our project, let us know! Also, be sure to check out our guide on contributing to MXNet and our development guides wiki.
