
we may want fine tuning as a separate option since we need to check more stuff.. #137

Closed
alinelena opened this issue May 4, 2024 · 4 comments · Fixed by #142

Comments

@alinelena (Member)

    We may want fine-tuning as a separate option, since we need to check more things.

It will also need a restart.

Originally posted by @alinelena in #127 (comment)

@ElliottKasoar (Member) commented May 7, 2024

Should these checks not be handled by MACE's (or other MLIPs') training scripts?

We can check that foundation_model is passed, which triggers a few things, including load_foundations from finetuning_utils.py. Otherwise, wouldn't it make more sense to contribute any further essential checks back to MACE? Or are there things that would be specific to janus?
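
A minimal sketch of the kind of janus-side check being discussed, assuming fine-tuning options arrive as a plain config dict before being handed to MACE; the function name and config keys here are illustrative, not janus-core's actual API:

# Hypothetical sketch: validate fine-tuning options before delegating to
# MACE's training script. The "finetune"/"foundation_model" keys and the
# function name are assumptions for illustration only.
def check_finetune_options(config: dict) -> None:
    """Raise early if fine-tuning is requested without a foundation model."""
    if config.get("finetune") and not config.get("foundation_model"):
        raise ValueError(
            "Fine-tuning requires foundation_model to be set in the config"
        )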

@alinelena (Member, Author)

For example, it will be useful to check that the foundation model exists. Yes, I agree the more fundamental stuff should go to MACE.
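
A hedged sketch of that existence check, following the convention in MACE's training script (quoted in the next comment), where a handful of named mace-mp-0 sizes are accepted and anything else is treated as a file path:

from pathlib import Path

# Sketch of the proposed check; the named-model aliases mirror the
# ["small", "medium", "large"] list in the MACE excerpt below.
def check_foundation_model(foundation_model: str) -> None:
    """Fail fast if the foundation model is neither a known alias nor a file."""
    if foundation_model in ("small", "medium", "large"):
        return  # named mace-mp-0 models are downloaded by MACE itself
    if not Path(foundation_model).is_file():
        raise FileNotFoundError(f"Foundation model not found: {foundation_model}")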

@ElliottKasoar (Member) commented May 7, 2024

    For example, it will be useful to check that the foundation model exists. Yes, I agree the more fundamental stuff should go to MACE.

I'm not sure there's even a straightforward way of doing that, since the model loading is done within the training script, e.g.:

# Excerpt from MACE's training script: named mace-mp-0 models are handled
# here, and a final else branch (not shown) falls back to
# torch.load(args.foundation_model) for paths.
if args.foundation_model is not None:
    if args.foundation_model in ["small", "medium", "large"]:
        logging.info(
            f"Using foundation model mace-mp-0 {args.foundation_model} as initial checkpoint."
        )
        calc = mace_mp(
            model=args.foundation_model,
            device=args.device,
            default_dtype=args.default_dtype,
        )

Even if we loaded the model, we couldn't pass it explicitly, since we have to go via the config.

We could load it ourselves and pass the path, to make use of the final else branch's torch.load, but it does mean we miss any model-specific logic they include.
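
A rough sketch of that workaround, assuming MACE's public mace_mp calculator constructor and the MACECalculator.models attribute; the save path is illustrative:

import torch
from mace.calculators import mace_mp

# Load a named foundation model ourselves via the public calculator API,
# then save it to disk so the training script's final else branch
# (torch.load on args.foundation_model) picks it up from the path.
calc = mace_mp(model="small", device="cpu", default_dtype="float64")
model = calc.models[0]  # MACECalculator keeps the underlying torch model(s) here

model_path = "mace_foundation.model"  # illustrative filename
torch.save(model, model_path)

# model_path would then be passed as foundation_model in the training config,
# bypassing the named-model branch shown above.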

@ElliottKasoar (Member)

    We could load it ourselves and pass the path, to make use of the final else branch's torch.load, but it does mean we miss any model-specific logic they include.

I suppose we will need to do this, or something similar, for AiiDA anyway, though.

@ElliottKasoar linked a pull request May 10, 2024 that will close this issue