
Releases: lucidrains/PaLM-pytorch

0.2.2

29 Jul 20:36
7164d13

v0.2.1

20 Jun 03:28
fix alibi in palm lite

v0.2.0a

18 Jun 15:48
release palm lite version, thanks to @conceptofmind

v0.2.0

18 Jun 15:46
release palm lite version, thanks to @conceptofmind

0.1.0

22 Apr 15:06
fuse the attention and feedforward projections, thanks to @feifeibear for alerting me to this
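For context, PaLM's parallel block computes attention and feedforward from the same normalized input, so their input projections can share a single matmul. A minimal sketch of the idea (illustrative names and dimensions, not the repo's exact code, and ignoring PaLM's multi-query attention detail):

```python
import torch
from torch import nn

class FusedParallelProjection(nn.Module):
    # illustrative sketch: q, k, v and the feedforward hidden projection
    # come out of one fused linear layer and are split afterwards
    def __init__(self, dim, dim_head=64, heads=8, ff_mult=4):
        super().__init__()
        attn_inner = dim_head * heads
        ff_inner = dim * ff_mult
        self.split_sizes = (attn_inner, attn_inner, attn_inner, ff_inner)
        self.fused_proj = nn.Linear(dim, sum(self.split_sizes), bias=False)

    def forward(self, x):
        q, k, v, ff = self.fused_proj(x).split(self.split_sizes, dim=-1)
        return q, k, v, ff

# usage: q, k, v, ff = FusedParallelProjection(dim=512)(torch.randn(1, 128, 512))
```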

0.0.12

05 Apr 23:58
fix rotary embedding caching

0.0.11

05 Apr 17:45
start chipping away at Triton version of PaLM, use causal numerically stable softmax (no need for causal mask) + bias-less layernorm, modified from Phil Tillet's layernorm tutorial, cite Triton
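For reference, the two pieces named here are simple to express in plain PyTorch. A minimal sketch (assumed shapes and names, not the Triton kernels themselves):

```python
import torch
from torch import nn

def stable_softmax(sim):
    # subtract the row-wise max before exponentiating for numerical stability;
    # a causal Triton kernel can avoid an explicit mask by only visiting keys
    # at or before each query (an assumption about the kernel, not shown here)
    sim = sim - sim.amax(dim=-1, keepdim=True).detach()
    return sim.softmax(dim=-1)

class BiaslessLayerNorm(nn.Module):
    # layernorm with a learned scale (gamma) but no bias (beta)
    def __init__(self, dim, eps=1e-5):
        super().__init__()
        self.eps = eps
        self.gamma = nn.Parameter(torch.ones(dim))

    def forward(self, x):
        mean = x.mean(dim=-1, keepdim=True)
        var = x.var(dim=-1, unbiased=False, keepdim=True)
        return (x - mean) * (var + self.eps).rsqrt() * self.gamma
```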

0.0.10

05 Apr 03:06
0.0.10a

cache causal mask and rotary embeddings within attention module
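A minimal sketch of that caching pattern (hypothetical helper, not the repo's exact code): keep the causal mask and rotary frequencies in non-persistent buffers and rebuild them only when a longer sequence arrives.

```python
import torch
from torch import nn

class CausalRotaryCache(nn.Module):
    # hypothetical helper: caches the causal mask and rotary frequencies,
    # rebuilding them only when the requested sequence length grows
    def __init__(self, dim_head):
        super().__init__()
        inv_freq = 1.0 / (10000 ** (torch.arange(0, dim_head, 2).float() / dim_head))
        self.register_buffer("inv_freq", inv_freq, persistent=False)
        self.register_buffer("cached_freqs", None, persistent=False)
        self.register_buffer("cached_mask", None, persistent=False)

    def get(self, n):
        # reuse the cache when it already covers the requested length
        if self.cached_freqs is not None and self.cached_freqs.shape[0] >= n:
            return self.cached_freqs[:n], self.cached_mask[:n, :n]
        device = self.inv_freq.device
        t = torch.arange(n, device=device, dtype=self.inv_freq.dtype)
        freqs = torch.einsum("i,j->ij", t, self.inv_freq)
        mask = torch.ones(n, n, device=device, dtype=torch.bool).triu(1)
        self.cached_freqs, self.cached_mask = freqs, mask
        return freqs, mask
```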

0.0.9

04 Apr 23:47
fix prelayernorm in attention

0.0.8

04 Apr 23:25
add enwik8 training