Timer (Large Time Series Model)

This repo provides the official code and checkpoints for Timer: Transformers for Time Series Analysis at Scale, a Large Time Series Model for unified time series modeling across various tasks.

Updates

🚩 News (2024.5) Our paper is accepted by ICML 2024.

🚩 News (2024.4) An online API is coming soon, supporting zero-shot forecasting!

🚩 News (2024.4) The pre-training scale has been extended to 15B time points, exhibiting zero-shot capability.

🚩 News (2024.2) The checkpoint pre-trained on UTSD-4G is available.

🚩 News (2024.2) The fine-tuning code for forecasting is released.

Introduction

Time Series Transformer (Timer) consists of GPT-style Transformers pre-trained on multi-domain time series, serving as a Large Time Series Model (LTSM). [Project Page]

We curate large-scale datasets comprising 1B time points, propose a unified training strategy based on single-series sequences, and present Timer with a decoder-only architecture. As an LTSM, Timer is equipped with:

  • Generalization ability: one model fits all domains.

  • Task generality: one model copes with various tasks.

  • Scalability: performance improves with the scale of pre-training.

Showcases

Forecasting with data scarcity (limited downstream training samples)

Segment-level imputation with few-shot samples

On-the-fly anomaly detection on UCR Anomaly Archive

Approach

Large Dataset

We curate the Unified Time Series Dataset (UTSD), which covers 7 domains and up to 1 billion time points, organized in hierarchical capacities to facilitate research on scalability and domain transfer.

Pre-training Strategy

To facilitate pre-training on extensive time series, we convert heterogeneous series into single-series sequences (S3), preserving the patterns of series variations within a unified context length, towards the well-established tokenization used in natural language.
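
As a concrete illustration, the Python/NumPy sketch below shows one way heterogeneous, multivariate series could be flattened into normalized univariate windows of a unified context length; the function name, normalization, and window settings are assumptions for illustration, not the repository's actual preprocessing.

import numpy as np

# Illustrative sketch of the S3 idea: every variable of every dataset becomes
# part of one pool of normalized univariate windows with a unified context length.
# The function name and default lengths are assumptions, not the repo's API.
def to_single_series_sequences(series_list, context_length=672, stride=672):
    windows = []
    for series in series_list:
        arr = np.atleast_2d(np.asarray(series, dtype=np.float32))
        if arr.shape[0] < arr.shape[1]:        # ensure shape (time, variables)
            arr = arr.T
        for v in range(arr.shape[1]):          # treat each variable independently
            x = arr[:, v]
            x = (x - x.mean()) / (x.std() + 1e-8)   # per-series normalization
            for start in range(0, len(x) - context_length + 1, stride):
                windows.append(x[start:start + context_length])
    return np.stack(windows) if windows else np.empty((0, context_length))

# Two datasets with different lengths and variable counts merge into one pool.
pool = to_single_series_sequences([np.random.randn(2000, 3), np.random.randn(1500)])
print(pool.shape)    # (num_windows, 672)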

Model Architecture

With the substantial progress of decoder-only large language models and our evaluation of other backbone alternatives, we adopt the GPT-style Transformer with autoregressive generation as the backbone for LTSMs.
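
For readers who want a concrete picture, the minimal PyTorch sketch below shows a decoder-only Transformer over time-series patches trained with next-patch prediction. It only illustrates the GPT-style design (positional encoding is omitted for brevity); the class name and layer sizes are assumptions, not the released Timer architecture.

import torch
import torch.nn as nn

# Minimal decoder-only (causal) Transformer over patch tokens; each position
# predicts the next patch. Hyperparameters are illustrative assumptions.
class TinyDecoderOnlyTS(nn.Module):
    def __init__(self, patch_len=96, d_model=256, n_heads=8, n_layers=4):
        super().__init__()
        self.embed = nn.Linear(patch_len, d_model)       # patch -> token embedding
        layer = nn.TransformerEncoderLayer(
            d_model, n_heads, dim_feedforward=4 * d_model,
            batch_first=True, norm_first=True)
        self.blocks = nn.TransformerEncoder(layer, n_layers)
        self.head = nn.Linear(d_model, patch_len)        # token -> next patch

    def forward(self, x):                                # x: (batch, patches, patch_len)
        causal = nn.Transformer.generate_square_subsequent_mask(x.size(1))
        h = self.blocks(self.embed(x), mask=causal)      # causal self-attention
        return self.head(h)

model = TinyDecoderOnlyTS()
series = torch.randn(8, 7, 96)                           # 8 sequences of 7 patches
pred = model(series)                                     # prediction at every position
loss = nn.functional.mse_loss(pred[:, :-1], series[:, 1:])   # next-patch MSE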

Unified Generative Task Formulation

Timer is applicable to various tasks, all of which are realized through a unified generative approach.
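
To make this concrete, the sketch below shows one way the three showcased tasks could be cast as generation with a single next-patch model (such as the illustrative model above): forecasting rolls out future patches autoregressively, segment-level imputation generates the missing segment from the observed prefix, and anomaly detection scores each patch by its prediction error. The function names and scoring rule are illustrative assumptions, not the repository's implementation.

import torch

# Illustrative unified-generation sketch: all three tasks reuse a single
# next-patch generator `model` (e.g. the TinyDecoderOnlyTS sketch above).
@torch.no_grad()
def forecast(model, context, horizon_patches):
    seq = context                                   # (batch, patches, patch_len)
    for _ in range(horizon_patches):                # autoregressive roll-out
        next_patch = model(seq)[:, -1:]             # last position predicts next patch
        seq = torch.cat([seq, next_patch], dim=1)
    return seq[:, context.size(1):]                 # generated future patches

@torch.no_grad()
def impute_segment(model, observed_prefix, num_missing_patches):
    # Segment-level imputation as generation of the masked segment.
    return forecast(model, observed_prefix, num_missing_patches)

@torch.no_grad()
def anomaly_score(model, seq):
    # On-the-fly detection: prediction error of each observed patch.
    pred = model(seq)[:, :-1]
    return ((pred - seq[:, 1:]) ** 2).mean(dim=-1)  # per-patch squared error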

Performance

We compare Timer with state-of-the-art approaches and demonstrate the benefit of pre-training in data-scarce scenarios, known as the few-shot capability of large models.

Scalability

By increasing the parameters and pre-training scale, Timer achieves notable performance improvement: 0.231 $\to$ 0.138 (−40.3%), surpassing the previous state-of-the-art deep forecasters.

Flexible Sequence Length

The decoder-only architecture provides additional flexibility to accommodate time series of different lookback and forecast lengths.
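
Continuing the illustrative sketches above, the snippet below shows why autoregressive decoding accommodates arbitrary lookback and forecast lengths at inference time: the same model consumes however many patches are available and rolls out however many are requested. Shapes here are assumptions and reuse the hypothetical TinyDecoderOnlyTS and forecast() examples defined earlier.

import torch

# Different lookback and forecast lengths with one model, using the
# illustrative TinyDecoderOnlyTS instance and forecast() sketch above.
short_context = torch.randn(1, 4, 96)      # 4 lookback patches
long_context = torch.randn(1, 12, 96)      # 12 lookback patches

print(forecast(model, short_context, horizon_patches=2).shape)   # torch.Size([1, 2, 96])
print(forecast(model, long_context, horizon_patches=5).shape)    # torch.Size([1, 5, 96])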

Code for Fine-tuning

  1. Install PyTorch and necessary dependencies.

pip install -r requirements.txt

  2. Put the datasets [Google Drive] [Tsinghua Cloud] under the folder ./dataset/.

  3. Download the pre-trained checkpoints and put them under the folder ./checkpoints/.

  4. Train and evaluate the model. We provide scripts for the above tasks under the folder ./scripts/.

# forecasting
bash ./scripts/forecast/ECL.sh

# TODO: segment-level imputation
bash ./scripts/imputation/ECL.sh

# TODO: anomaly detection on the fly
bash ./scripts/anomaly_detection/UCR.sh

  5. Training on custom data: Tutorials are provided in this repo.

Future Work

We are preparing an online service for zero-shot forecasting (demo). Please stay tuned for updates!

Citation

If you find this repo helpful, please cite our paper.

@article{liu2024timer,
  title={Timer: Transformers for Time Series Analysis at Scale},
  author={Liu, Yong and Zhang, Haoran and Li, Chenyu and Huang, Xiangdong and Wang, Jianmin and Long, Mingsheng},
  journal={arXiv preprint arXiv:2402.02368},
  year={2024} 
}

Acknowledgement

We appreciate the following GitHub repos for their valuable code and efforts.

Contact

If you have any questions or want to use the code, feel free to contact: