
Releases: mosaicml/composer

v0.22.0

01 May 16:59

What's New

🔥 Support for PyTorch v2.3.0

Composer now supports the recently-released PyTorch version 2.3.0! Please raise any issues with us so we can address them.

Bug Fixes

  • Fix checks for device microbatch size for sequence parallelism in #3200
  • Fix token logging in #3206
  • Fix run name search in MLFlowLogger in #3215
  • Fix FQN names with activation checkpointing in #3210
  • Use strict weight matching for checkpoint loading in #3219

What's Changed

Full Changelog: v0.21.3...v0.22.0

v0.21.3

19 Apr 15:41

Bug Fixes

1. Increased Robustness to Checkpoint Loading

We've patched several edge cases in loading sharded checkpoints, especially with DTensors, which should decrease memory usage when loading checkpoints. We've also hardened retry logic against cloud object store failures, ensuring higher robustness to transient network issues.

What's Changed

New Contributors

Full Changelog: v0.21.2...v0.21.3

v0.21.2

03 Apr 21:14

Bug Fixes

1. Enable torch 2.2.2 (#3161)

Composer currently monkeypatches PyTorch for nightly versions in order to fix upstream bugs. With the release of torch 2.2.2, these monkeypatches were mistakenly applied to the stable release due to incorrect gating on imports. This release fixes the gating, enabling torch 2.2.2.

2. MPS Metric Computation on CPU (#3105)

Due to bugs in computing torchmetrics on Mac (MPS) devices, we now move metric computation onto the CPU. Previously, data was not properly moved to the CPU before computation.

Thank you to @hyenal for this contribution!

3. Batch Sampler Support (#3105)

Composer now supports batch samplers; previously, specifying one in the dataloader resulted in an error.

Thank you to @Ghelfi for this contribution!
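A batch sampler yields a list of dataset indices per batch rather than one index at a time. The class below is a plain-Python sketch of that contract (illustrative, not Composer code) showing what the dataloader receives from a batch sampler:

```python
# Minimal batch sampler sketch: yields lists of indices, one list per batch.
class SimpleBatchSampler:
    def __init__(self, dataset_len: int, batch_size: int):
        self.dataset_len = dataset_len
        self.batch_size = batch_size

    def __iter__(self):
        batch = []
        for idx in range(self.dataset_len):
            batch.append(idx)
            if len(batch) == self.batch_size:
                yield batch
                batch = []
        if batch:  # final partial batch
            yield batch

    def __len__(self):
        # number of batches, rounding up for the partial batch
        return (self.dataset_len + self.batch_size - 1) // self.batch_size

batches = list(SimpleBatchSampler(10, 4))
# batches == [[0, 1, 2, 3], [4, 5, 6, 7], [8, 9]]
```

With a torch DataLoader, such a sampler is passed via the `batch_sampler` argument, and the resulting dataloader can be handed to the Trainer as usual.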

What's Changed

New Contributors

Full Changelog: v0.21.1...v0.21.2

v0.21.1

22 Mar 01:08

Bug Fixes

1. Fix to HSDP checkpoint loading

The previous release broke checkpoint loading when using HSDP with multiple replicas. This patch release fixes checkpoint loading.

What's Changed

Full Changelog: v0.21.0...v0.21.1

v0.21.0

21 Mar 21:19

What's New

1. Aggregate Memory Monitoring (#3042)

The Memory Monitor callback now supports aggregating memory statistics across nodes. Summary stats for a run's memory usage across the cluster can dramatically help debug straggler nodes or non-homogeneous workloads. The memory monitor can now aggregate and log combined values at a user-specified frequency.

Example:

from composer import Trainer
from composer.callbacks import MemoryMonitor

trainer = Trainer(
    model=model,
    train_dataloader=train_dataloader,
    optimizers=optimizer,
    max_duration="1ep",
    callbacks=[
        MemoryMonitor(
            dist_aggregate_batch_interval=10,  # aggregate every 10 batches
        )
    ],
)

2. Advanced Compression Options (#3118)

Large model checkpoints can be expensive to store and transfer. In this release, we've upgraded our compression support to accept several new formats, which offer better compression/time tradeoffs and can be handled with standard CLI tools. To use compression, suffix your checkpoint filename with a compression extension. We now support the following extensions:

  • bz2
  • gz
  • lz4
  • lzma
  • lzo
  • xz
  • zst

Example:

from composer import Trainer

trainer = Trainer(
    model=model,
    train_dataloader=train_dataloader,
    optimizers=optimizer,
    max_duration="1ep",
    save_filename='ep{epoch}-ba{batch}-rank{rank}.pt.lz4',
)

Thank you to @mbway for adding this support!

What's Changed

New Contributors

Full Changelog: v0.20.1...v0.21.0

v0.20.1

27 Feb 19:51

What's New

1. Torch 2.2.1 Support

Composer now supports torch 2.2.1! We've raised the pin to allow the latest torch, and we've upstreamed all torch monkeypatches so Composer can run out of the box with the latest and greatest torch features.

What's Changed

v0.20.0

23 Feb 18:39
9ecea4f

What's New

1. New Neptune Logger

Composer now supports logging training data to neptune.ai using the NeptuneLogger. To get started:

from composer.loggers import NeptuneLogger

neptune_project = 'test_project'
neptune_api_token = 'test_token'

neptune_logger = NeptuneLogger(
    project=neptune_project,
    api_token=neptune_api_token,
    rank_zero_only=False,
    mode='debug',
    upload_artifacts=True,
)

We also have an example project demonstrating all the awesome things you can do with this integration!


Additional information on the NeptuneLogger can be found in the docs.

2. OOM observer callback with memory visualizations

Composer now has an OOM observer callback. When a model runs out of memory, this callback helps produce a trace which identifies memory allocations, which can be critical to designing strategies to mitigate memory usage.

Example:

from composer import Trainer
from composer.callbacks import OOMObserver
# constructing trainer object with this callback
trainer = Trainer(
    model=model,
    train_dataloader=train_dataloader,
    eval_dataloader=eval_dataloader,
    optimizers=optimizer,
    max_duration="1ep",
    callbacks=[
        OOMObserver(
            folder="traces",
            overwrite=True,
            filename="rank{rank}_oom",
            remote_filename="oci://bucket_name/{run_name}/oom_traces/rank{rank}_oom",
        )
    ],
)


3. Log all GPU rank stdout/err to MosaicML platform

Composer has expanded its integration with the MosaicML platform. Now, we can view the stdout/stderr of all GPU ranks with MCLI logs to enable more comprehensive analysis of jobs.

Example:

mcli logs <run-name> --node x --gpu x 

Note: this defaults to node rank 0 if --node is not provided.

Also, we can find the logs of any global GPU rank with the command:

mcli logs <run-name> --global-gpu-rank x

Bug Fixes

What's Changed

New Contributors

Full Changelog: v0.19.1...v0.20.0

v0.19.1

08 Feb 20:49

What's New

1. New Event: BEFORE_LOAD (#2974)

Composer now has the event Event.BEFORE_LOAD, which lets users modify state before a model is loaded. This is particularly useful for accessing attributes which may not exist at Event.INIT, such as the dataloader state.
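The ordering matters: BEFORE_LOAD fires after setup objects like the dataloader exist, but before the checkpoint is loaded. The sketch below is an illustrative dispatcher (not Composer internals) showing where a hypothetical callback would see that state:

```python
# Illustrative event ordering around checkpoint load; names taken from the
# release notes, the dispatcher itself is a sketch.
EVENTS = ['INIT', 'BEFORE_LOAD', 'LOAD', 'AFTER_LOAD']

class RecordDataloaderState:
    """Hypothetical callback that inspects state at BEFORE_LOAD."""
    def __init__(self):
        self.seen = []

    def run_event(self, event, state):
        if event == 'BEFORE_LOAD':
            # By BEFORE_LOAD the dataloader is attached to state,
            # which is not guaranteed at INIT.
            self.seen.append(state['dataloader'])

def fire(callbacks, state):
    for event in EVENTS:
        for cb in callbacks:
            cb.run_event(event, state)

cb = RecordDataloaderState()
fire([cb], {'dataloader': 'train_dataloader'})
# cb.seen == ['train_dataloader']
```

In real Composer code, the equivalent hook is a Callback method invoked at Event.BEFORE_LOAD with the Trainer's State.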

2. Registering model in MLFlow with run id (#2967)

The MLFlow logger now has register_model_with_run_id, which allows users to register a model based on the run_id. This way of registering the model preserves the link to the MLflow run.
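The run link comes from MLflow's standard `runs:/<run_id>/<artifact_path>` model URI. The helper below is hypothetical and only illustrates the URI shape involved in run-linked registration:

```python
# Hypothetical helper showing MLflow's run-linked model URI format.
def model_uri_for_run(run_id: str, artifact_path: str = 'model') -> str:
    # 'runs:/' URIs keep the registered model tied to its originating run
    return f'runs:/{run_id}/{artifact_path}'

uri = model_uri_for_run('0a1b2c3d')
# uri == 'runs:/0a1b2c3d/model'
```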

What's Changed

  • before_load event added #2974
  • Add a register_model_with_run_id api to MLflowLogger #2967

Full Changelog: v0.19.0...v0.19.1

v0.19.0

02 Feb 09:07

What's New

1. Improved DTensor Support

Composer now supports elastic saving and loading of DTensors at various mesh sizes.

2. Checkpoint Saving and Loading from Databricks MLFlow

Composer now supports saving and loading checkpoints to Databricks-managed MLFlow.

composer_model = MyComposerModel(...)

trainer = Trainer(
    model=composer_model,
    save_folder='dbfs:/databricks/mlflow-tracking/{mlflow_experiment_id}/{mlflow_run_id}/artifacts',
    logger=MLFlowLogger(...),
    load_path='dbfs:/databricks/mlflow-tracking/{mlflow_experiment_id}/{mlflow_run_id}/artifacts',
    ...
)

3. Better Communication Computation Overlap in FSDP

Composer now has improved communication/computation overlap in our FSDP code which should improve MFU across several architectures.

4. Python 3.11 + Torch 2.2 Support

Initial support for Python 3.11 and torch 2.2 has been added to Composer.

5. PEFT LoRA

PEFT LoRA is now supported in the HuggingFaceModel class.
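A configuration sketch of the idea, assuming the peft package is installed; the LoraConfig values and target_modules shown are illustrative choices, and hf_model/tokenizer are placeholders for a transformers model and tokenizer:

```python
from peft import LoraConfig
from composer.models import HuggingFaceModel

# Illustrative LoRA settings; target_modules depends on the architecture.
lora_config = LoraConfig(
    r=8,                                  # low-rank dimension
    lora_alpha=16,                        # scaling factor
    target_modules=['q_proj', 'v_proj'],  # attention projections to adapt
)

model = HuggingFaceModel(
    hf_model,                 # placeholder: a transformers PreTrainedModel
    tokenizer=tokenizer,      # placeholder: the matching tokenizer
    peft_config=lora_config,  # Composer applies the PEFT adapters
)
```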

6. Refactored Evaluation

in_context_learning_evaluation.py has a new design with cleaner abstractions and easier interfaces to work with.

7. Azure Checkpointing

Composer now supports saving your model in Azure.
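A configuration sketch: Composer's remote checkpointing works by pointing save_folder at a remote URI, so Azure saving plausibly looks like the fragment below. The azure:// container/path form and credential handling via environment variables are assumptions here, and model/train_dataloader are placeholders:

```python
from composer import Trainer

# Assumed URI shape: azure://<container>/<path>; credentials are expected
# to come from environment variables rather than the URI itself.
trainer = Trainer(
    model=model,                    # placeholder ComposerModel
    train_dataloader=train_dataloader,
    max_duration='1ep',
    save_folder='azure://my-container/checkpoints',
    save_filename='ep{epoch}-ba{batch}-rank{rank}.pt',
)
```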

8. MLFlow Checkpointing

Composer now supports saving your model in MLFlow.

Bug Fixes

What's Changed

New Contributors


v0.18.2

01 Feb 04:48

Bug Fixes

What's Changed

Full Changelog: v0.18.1...v0.18.2