
Release 2.0.0


It's been a long time since our last release (v1.2.0). For the past six months, we have focused on training efficiency.

In this release, LightSeq supports fast training for models in the Transformer family!

We provide highly optimized custom operators for PyTorch and TensorFlow, which cover the entire training process for Transformer-based models. Users of LightSeq can use these operators to build their own models with efficient computation.
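As a rough sketch of what building with these operators can look like in PyTorch, the snippet below wraps one of LightSeq's fused Transformer encoder layers. The layer name LSTransformerEncoderLayer, the get_config helper, and the config field names are assumptions based on the repository's training examples and may differ between versions; check the examples directory for the exact signatures.

```python
import torch
# Fused encoder layer; import path and name assumed from the repo's training examples.
from lightseq.training.ops.pytorch.transformer_encoder_layer import LSTransformerEncoderLayer

# Build a config for the fused layer; field names follow the LightSeq
# training examples and may vary between releases.
config = LSTransformerEncoderLayer.get_config(
    max_batch_tokens=4096,
    max_seq_len=256,
    hidden_size=512,
    intermediate_size=2048,
    nhead=8,
    attn_prob_dropout_ratio=0.1,
    activation_dropout_ratio=0.1,
    hidden_dropout_ratio=0.1,
    pre_layer_norm=True,
    fp16=False,
    local_rank=0,
)

# The fused kernels run on GPU.
layer = LSTransformerEncoderLayer(config).cuda()

# Dummy batch: (batch, seq_len, hidden) activations and a boolean padding mask.
x = torch.randn(8, 256, 512).cuda()
pad_mask = torch.zeros(8, 256, dtype=torch.bool).cuda()
out = layer(x, pad_mask)
```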

In addition, we have integrated our custom operators into popular training libraries such as Fairseq, Hugging Face, and NeurST, which enables a 1.5x-3x end-to-end speedup compared to the native implementations.

With only a small amount of code, you can enjoy the excellent performance provided by LightSeq. Try it now!

Training

  • support lightseq-train to accelerate Fairseq training, including an optimized Transformer model, Adam optimizer, and label-smoothed cross-entropy loss
  • Hugging Face BERT training example
  • NeurST Transformer training example for TensorFlow users

Inference

  • support GPT Python wrapper
  • inference APIs have been moved to lightseq.inference

This release changes the inference API: all inference APIs have been moved to the lightseq.inference module. For example, use import lightseq.inference and then model = lightseq.inference.Transformer("$PB_PATH", max_batch_size).
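A minimal sketch of the new import path is shown below. The constructor line follows the note above; the model file path is a placeholder, and the infer() entry point and token-ID input shape are assumptions based on the Python wrapper's usage examples.

```python
import numpy as np
import lightseq.inference

# Load an exported model file (path is a placeholder); the constructor
# signature follows the note above.
model = lightseq.inference.Transformer("transformer.pb", max_batch_size=8)

# Hypothetical batch of source token IDs (shape: batch x seq_len); the
# infer() entry point is assumed from the Python wrapper's examples.
src_tokens = np.array([[63, 47, 65, 1507, 88, 74, 10, 2057, 362, 9, 284, 6]], dtype=np.int32)
result = model.infer(src_tokens)
print(result)
```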