Release 3.0.0

@godweiyang released this 25 Oct 02:42

It's been a long time since our last release (v2.2.0). For the past year, we have focused on int8 quantization.

In this release, LightSeq supports int8 quantized training and inference. Compared with PyTorch QAT, LightSeq int8 training achieves a 3x speedup without any performance loss. Compared with the previous LightSeq fp16 inference, the int8 engine achieves a speedup of up to 1.7x.

The LightSeq int8 engine supports multiple models, such as Transformer, BERT, and GPT. For int8 training, users only need to enable quantization mode on the model with model.apply(enable_quant). For int8 inference, users only need to use QuantTransformer instead of the fp16 Transformer, as sketched below.
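A minimal sketch of these two entry points, assuming the usual lightseq.inference API (Transformer, QuantTransformer, and infer); the enable_quant import path and the model file names are placeholders, not taken from this release:

```python
import numpy as np
import lightseq.inference as lsi

# int8 training: enable quantization mode on a model built from LightSeq
# training layers (the import path for enable_quant is an assumption;
# this release only names the call itself):
#   from lightseq.training.ops.pytorch.quantization import enable_quant
#   model.apply(enable_quant)

# int8 inference: load a quantized export with QuantTransformer instead
# of the fp16 Transformer (the *.pb file names are placeholders)
fp16_model = lsi.Transformer("lightseq_fp16.pb", 8)       # 8 = max batch size
int8_model = lsi.QuantTransformer("lightseq_int8.pb", 8)

# both engines expose the same generation API
input_ids = np.array([[63, 47, 65, 1507, 88, 74, 10, 2057, 362, 9, 284, 6]])
fp16_output = fp16_model.infer(input_ids)
int8_output = int8_model.infer(input_ids)
```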

Other changes in this release include support for models such as MoE, bug fixes, and performance improvements.