Releases · lucidrains/memorizing-transformers-pytorch
0.4.1
0.4.0
prepare to use knn attention in another repository, for the ultimate long context attention in the world
0.3.10
0.3.9a
fix setup.py
0.3.9
address https://github.com/lucidrains/memorizing-transformers-pytorch/issues/10
0.3.8
use the new einops unpack! thank you @arogozhnikov 🙏
0.3.7
just give knn attention its own relative positional bias
0.3.6
give knn attention layer one more way to tune out local if need be
0.3.5
allow the network to pay more attention to memory later into training, if need be
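One common way to let a network "tune in" to memory gradually is a learned gate that mixes local and memory attention outputs, initialized so that memory starts mostly closed. A minimal numpy sketch of that idea; the function and parameter names are illustrative assumptions, not the repository's exact code:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gated_combine(local_out, memory_out, gate_bias):
    """Blend local and KNN-memory attention outputs with a learned gate.

    gate_bias is a learned scalar (e.g. one per head, a hypothetical choice
    here). Initializing it negative makes sigmoid(gate_bias) small, so the
    network starts by relying on local attention and can open the gate to
    memory later in training if it helps.
    """
    g = sigmoid(gate_bias)
    return local_out * (1.0 - g) + memory_out * g
```

With `gate_bias = -4.0`, the gate starts at roughly 0.018, so memory contributes almost nothing until training pushes the bias up.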
0.3.4
turn KNN attention into full cosine sim attention (from the paper on query-key normalization for NLP), as it makes the most sense given the l2norm the paper did with the memory keys. give initial low temperature, beat the baseline finally
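Cosine-sim attention l2-normalizes both queries and keys before the dot product, so similarities are bounded in [-1, 1]; a low temperature then re-sharpens the softmax. A minimal numpy sketch of the technique (not the repository's implementation; names and the fixed scalar temperature are assumptions):

```python
import numpy as np

def l2norm(x, axis=-1):
    # normalize vectors to unit length so dot products become cosine similarities
    return x / np.linalg.norm(x, axis=axis, keepdims=True)

def cosine_sim_attention(q, k, v, temperature=0.1):
    """Attention where logits are cosine similarities divided by a temperature.

    Because similarities are bounded in [-1, 1], a low temperature (here a
    fixed scalar for illustration; it can also be learned) is needed to make
    the softmax peaked enough to attend selectively.
    """
    q, k = l2norm(q), l2norm(k)
    sim = (q @ k.T) / temperature
    # numerically stable softmax over keys
    attn = np.exp(sim - sim.max(axis=-1, keepdims=True))
    attn = attn / attn.sum(axis=-1, keepdims=True)
    return attn @ v
```

This pairs naturally with l2-normalized memory keys: retrieved memories and local keys then score on the same bounded scale, so neither dominates purely through vector magnitude.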