
v0.4.0

@dakinggg released this 22 Nov 03:45

🚀 LLM Foundry v0.4.0

LLM Foundry is an efficient codebase for training, evaluating, and deploying Large Language Models (LLMs) and serves as the foundation for the MPT-7B and MPT-30B models.

In addition to the usual bug fixes and performance improvements, we've added lots of new features!

New Features

Automatic sequence packing (#683)

You can now specify packing_ratio: auto under your finetuning dataset to automatically profile and select a good packing ratio, which is then used to efficiently pack your sequences together on the fly during finetuning. This can dramatically reduce the amount of compute wasted on padding tokens.
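
As an illustration, a finetuning dataloader config might look like the following (the dataset details here are hypothetical; packing_ratio is the only new key):

train_loader:
    name: finetuning
    dataset:
        hf_name: my-org/my-finetuning-dataset # hypothetical dataset name
        split: train
        max_seq_len: 2048
        packing_ratio: auto # profile candidate ratios and select a good one automatically
        shuffle: true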

Flash Attention 2 (#651, #666, #672)

We now support using Flash Attention 2 both in MPT and in any model that supports Flash Attention 2 via the Transformers library. See the training instructions to learn how to use the different versions of Flash Attention.
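
For MPT, the attention implementation is chosen through the model's attn_config. A minimal sketch, assuming attn_impl: flash dispatches to Flash Attention 2 when a 2.x version of flash-attn is installed:

model:
    name: mpt_causal_lm
    attn_config:
        attn_impl: flash # assumed to use Flash Attention 2 with flash-attn 2.x installed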

New PyTorch, Composer, Streaming, and Transformers versions (#648, #672, #736)

As always, we've updated to new versions of the core dependencies of LLM Foundry, bringing better performance, new features, and support for new models (codellama and mistral in particular).

Easy Databricks model deployment (#618)

We've made it much easier to go from a training run to a served model using Databricks model serving. To make use of this feature, you need to specify both an MLFlowLogger and a HuggingFaceCheckpointer for your run.

The MLFlowLogger should have a Unity Catalog model registry prefix of the form catalog.schema, which specifies where your models will be registered. For example,

loggers:
    mlflow:
        experiment_name: /Users/first.last@email.com/my_experiment_name
        tracking_uri: databricks
        model_registry_prefix: catalog.schema
        model_registry_uri: databricks-uc

The HuggingFaceCheckpointer should specify the name you want to register the model under. For example,

callbacks:
    hf_checkpointer:
        save_interval: 1ep # Save Hugging Face formatted checkpoints each epoch
        save_folder: s3://bucket/path/to/my/checkpoints
        mlflow_registered_model_name: my_model_name # Final model will be registered to catalog.schema.my_model_name

MPT model configurations

We've added a few new options when training with the MPT architecture in LLM Foundry; a combined config sketch follows the list.

  • Rotary embeddings (#675)
  • (Un)Tied word embeddings (#728)
  • Fine grained activation checkpointing (#720)
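
The key names below follow the PRs above, but the exact accepted values are assumptions; check the MPT config docstrings for the current interface.

model:
    name: mpt_causal_lm
    tie_word_embeddings: false # untie the input and output embeddings (#728)
    attn_config:
        rope: true # enable rotary positional embeddings (#675)
    activation_checkpointing_target: mptblock # assumed value; selects which module types
                                              # get checkpointed when activation
                                              # checkpointing is enabled (#720)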

Evaluation Improvements

We've released v0.1 of our Eval Gauntlet (#674, #748)! This adds many new benchmarks, chain-of-thought prompting, and a new safety category. Check out the README for full details!

In addition, we've made a few improvements to our evaluation options, with more to come!

  • Allow specifying multiple evaluation datasets to compute cross entropy and perplexity on during training (#603); a config sketch follows this list
  • Easier versions of the HumanEval dataset, which can be useful for comparing smaller models (#645)
  • More options for averaging the results of the Eval Gauntlet (#640)
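
For example, cross entropy and perplexity can be tracked on several datasets at once by giving eval_loader a list of labeled entries. This is a sketch based on the PR description; the dataset details are hypothetical:

eval_loader:
    - label: c4 # metrics for each dataset are logged under its label
      name: text
      dataset:
          remote: s3://bucket/path/to/c4
          split: val
          max_seq_len: 2048
          shuffle: false
    - label: wikipedia
      name: text
      dataset:
          remote: s3://bucket/path/to/wikipedia
          split: val
          max_seq_len: 2048
          shuffle: false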

New pretraining benchmarks (#543)

Added H100 profiling results to our benchmarking table.

Quality of life improvements

  • Improved Generate callback with more logging options; use it to log generations from your model over the course of training (see the sketch after this list). (#631)
  • Count number of tokens during training excluding padding tokens. Previously this count included padding tokens. (#676)
  • Use the PyTorch profiler to profile your training runs. (#678)
  • A convenience script for using the much faster Hugging Face snapshot_download to download models from the Hugging Face Hub. (#708)
  • New AWS specific Docker images with LLM Foundry dependencies pre-installed. (#731)
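
As a rough illustration of configuring the Generate callback, a run config might include something like the following; the callback key and field names are assumptions, so check the callback builder for the current interface:

callbacks:
    generate_callback: # key name assumed
        prompts: # prompts to generate from during training
            - "The quick brown fox"
            - "My favorite recipe is"
        batch_log_interval: 500 # assumed field: log generations every 500 batches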

Experimental features

Inverse square root learning rate scheduler (#657)

We've added experimental support for the inverse square root learning rate scheduler.
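
A minimal scheduler config sketch, assuming the scheduler is registered as inv_sqrt_with_warmup (check llmfoundry/optim for the exact name and parameters):

scheduler:
    name: inv_sqrt_with_warmup # assumed registry name
    t_warmup: 500ba # linear warmup for 500 batches
    t_scale: 500ba # assumed parameter: time scale of the inverse square root decay
    t_cooldown: 0ba # assumed parameter: optional final linear cooldown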

Breaking changes

Updated Streaming defaults (#723)

We've upgraded to the latest Streaming version, including vastly improved default settings for partitioning and shuffling. This means that if you were using the defaults, you will get different results after upgrading. The new defaults should be more performant for the large majority of use cases. See the Streaming release notes for more details.

Removed support for PrefixLM for Bloom and OPT models (#704)

We occasionally remove unused, experimental parts of the code base so that we can focus on new features and better support existing ones. In this release, we've removed support for PrefixLM applied to Bloom and OPT models.


Full Changelog: v0.3.0...v0.4.0