
How to do model fine tuning? #12

Open
colemanhindes opened this issue Mar 15, 2024 · 10 comments
Labels
FAQ Frequently asked question

Comments

@colemanhindes

Really cool project! Enjoy the paper and have had fun testing it out. Will instructions on fine tuning be released?

Thanks for your time

@abdulfatir
Contributor

@colemanhindes Thanks for your interest. We are planning to release the training scripts soon but due to some other engagements there's no ETA yet. In the meantime, @canerturkmen and @shchur are working towards integrating Chronos into AutoGluon-TimeSeries (autogluon/autogluon#3978) and they're also planning to offer ways of fine-tuning the models.

@lostella lostella added the FAQ Frequently asked question label Mar 18, 2024
@lostella lostella pinned this issue Mar 18, 2024
@lostella lostella changed the title Fine tuning? How to do model fine tuning? Mar 19, 2024
@abdulfatir abdulfatir unpinned this issue Mar 26, 2024
@Saeufer

Saeufer commented Mar 30, 2024

+1 for this; if possible, please also consider #22 for custom data. Thanks!

@HALF111

HALF111 commented Apr 11, 2024

+1, looking forward to the release of the training and fine-tuning scripts!

1 similar comment
@TPF2017

TPF2017 commented Apr 15, 2024

+1, looking forward to the release of the training and fine-tuning scripts!

@0xrushi

0xrushi commented May 1, 2024

I took a look and noticed it's using a torch.nn model. I've put together this notebook for training/fine-tuning. Could someone verify that it's set up correctly? The losses seem unusual, but I suspect that's due to the dataset being quite small and my use of:

sequence_length = 10
prediction_length = 5

notebook: here

@iganggang

+1 for this; if possible, please also consider #22 for custom data. Thanks!

@lostella
Contributor

lostella commented May 9, 2024

A training and fine-tuning script was added in #63, together with the configurations that were used for pretraining the models on HuggingFace. We still need to add proper documentation, but roughly speaking:

  • required dependencies can be installed with pip install ".[training]" (or pip install "chronos[training] @ git+https://github.com/amazon-science/chronos-forecasting.git")
  • python scripts/training/train.py --help lists all available options
  • the config files in scripts/training/config can be adapted by
    • changing the data files to other GluonTS-compatible files (arrow format is recommended for efficiency, but parquet and JSON lines are also supported)
    • pointing to Chronos models (instead of the original T5), setting random_init: false, and adjusting the learning rate and number of steps for fine-tuning
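
To make the last two bullets concrete, here is a hypothetical fine-tuning config sketch. Only random_init is confirmed by the comment above; the other field names (training_data_paths, model_id, learning_rate, max_steps) and all values are assumptions for illustration and should be checked against the actual files in scripts/training/config:

```yaml
# Hypothetical fine-tuning config; field names other than random_init are assumptions.
training_data_paths:
  - "./my-dataset.arrow"           # GluonTS-compatible data file (arrow recommended)
model_id: amazon/chronos-t5-small  # start from a pretrained Chronos model, not the original T5
random_init: false                 # keep the pretrained weights (fine-tune rather than train from scratch)
learning_rate: 0.001               # typically lowered relative to pretraining
max_steps: 1000                    # typically far fewer steps than pretraining
```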

Happy training! cc @colemanhindes @Saeufer @HALF111 @TPF2017 @0xrushi @iganggang


@Alonelymess

I get this error when training chronos-t5-small:
ValueError: --tf32 requires Ampere or a newer GPU arch, cuda>=11 and torch>=1.7

@abdulfatir
Contributor

@Alonelymess that means your GPU does not support the TF32 floating-point format. Please run training/fine-tuning with the --no-tf32 flag, or set tf32 to false in your YAML config.
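
As a quick sanity check, you can query the GPU's compute capability with PyTorch: TF32 requires NVIDIA Ampere or newer, i.e. compute capability 8.0+. A minimal sketch (the helper names here are mine, not part of Chronos):

```python
def tf32_supported_from_capability(cap):
    """TF32 requires NVIDIA Ampere or newer, i.e. compute capability >= 8.0."""
    major, _minor = cap
    return major >= 8

def gpu_supports_tf32():
    import torch  # deferred import so the pure helper above has no dependencies
    if not torch.cuda.is_available():
        return False
    # get_device_capability() returns a (major, minor) tuple, e.g. (8, 6) for an RTX 3090
    return tf32_supported_from_capability(torch.cuda.get_device_capability())
```

If this returns False, use --no-tf32 (or tf32: false in the config) as suggested above.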
