
Releases: lucidrains/memorizing-transformers-pytorch

0.4.1

17 Jul 00:08
address https://github.com/lucidrains/memorizing-transformers-pytorch/issues/16

0.4.0

24 Mar 18:27
prepare to use knn attention in another repository, for the ultimate long context attention in the world

0.3.10

30 Nov 05:25

0.3.9a

09 Nov 22:09
fix setup.py

0.3.9

09 Nov 22:01
address https://github.com/lucidrains/memorizing-transformers-pytorch/issues/10

0.3.8

09 Nov 21:46
use the new einops unpack! thank you @arogozhnikov 🙏
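
For context, a minimal sketch of the einops pack / unpack API this release adopts; the tensor names and shapes are illustrative, not the repository's actual internals:

```python
import torch
from einops import pack, unpack

# concatenate local keys and retrieved memory keys along the sequence axis,
# keeping the packed shapes so the two pieces can be split apart again
local_k  = torch.randn(2, 128, 64)   # (batch, local seq, dim)    - illustrative
memory_k = torch.randn(2, 32, 64)    # (batch, knn memories, dim) - illustrative

packed, packed_shapes = pack([local_k, memory_k], 'b * d')   # (2, 160, 64)

# ... attention over the packed sequence would go here ...

local_k, memory_k = unpack(packed, packed_shapes, 'b * d')
```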

0.3.7

23 Apr 22:43
just give knn attention its own relative positional bias
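
A rough sketch of what a relative positional bias owned by a single attention layer could look like, using a simplified clamped bucketing scheme; the repository's actual module may differ:

```python
import torch
from torch import nn

class RelativePositionBias(nn.Module):
    # simplified bucketed relative position bias, held by one attention layer
    def __init__(self, heads, num_buckets = 32):
        super().__init__()
        self.num_buckets = num_buckets
        self.bias = nn.Embedding(num_buckets, heads)

    def forward(self, qlen, klen, device):
        q_pos = torch.arange(qlen, device = device)[:, None]
        k_pos = torch.arange(klen, device = device)[None, :]
        rel = (k_pos - q_pos).clamp(-(self.num_buckets // 2), self.num_buckets // 2 - 1)
        buckets = rel + self.num_buckets // 2          # shift into [0, num_buckets)
        return self.bias(buckets).permute(2, 0, 1)     # (heads, qlen, klen), added to logits
```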

0.3.6

23 Apr 22:18
give the knn attention layer one more way to tune out local attention, if need be

0.3.5

23 Apr 21:57
allow the network to pay more attention to memory later into training, if need be
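
A hypothetical sketch of the kind of learned gate this suggests: a per-head bias, initialized to favor local attention, that the network can push toward memory as training progresses (names and initialization here are assumptions, not the repo's exact code):

```python
import torch
from torch import nn

class MemoryGate(nn.Module):
    # hypothetical per-head gate blending local and memory attention outputs;
    # biased toward local attention at init, free to shift toward memory later
    def __init__(self, heads, init_bias = -3.):
        super().__init__()
        self.gate_bias = nn.Parameter(torch.full((heads, 1, 1), init_bias))

    def forward(self, local_out, mem_out):
        # local_out, mem_out: (batch, heads, seq, dim_head)
        gate = self.gate_bias.sigmoid()    # starts near 0 -> mostly local attention
        return local_out * (1 - gate) + mem_out * gate
```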

0.3.4

23 Apr 21:36
turn KNN attention into full cosine sim attention (from the query-key normalization for NLP paper), as it makes the most sense given the l2norm the paper applies to the memory keys. give it an initial low temperature; this finally beats the baseline
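
A minimal sketch of cosine-sim (query-key normalized) attention with a low fixed temperature, assuming simple single-head tensors; the actual KNN attention layer also mixes in retrieved memories:

```python
import torch
import torch.nn.functional as F

def cosine_sim_attention(q, k, v, temperature = 0.1):
    # l2-normalize queries and keys so the logits are cosine similarities,
    # then sharpen with a low temperature before the softmax
    q, k = map(lambda t: F.normalize(t, dim = -1), (q, k))
    sim = (q @ k.transpose(-2, -1)) / temperature
    return sim.softmax(dim = -1) @ v
```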