
models to benchmark #437

Open · kjappelbaum opened this issue Feb 8, 2023 · 2 comments

kjappelbaum (Owner) commented Feb 8, 2023

FMcil (Contributor) commented Mar 22, 2023

@kjappelbaum I was trying to think of the best way to benchmark moftransformer. It is pretrained on some tasks; is it a requirement to ensure that pretraining was not performed on leaderboard test MOFs, even in the case where the leaderboard tasks are very different from the pretraining tasks?
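
For context, the first-pass version of this check is an identifier-level audit: intersect the set of MOFs seen during pretraining with the leaderboard test set. A minimal sketch in Python, assuming plain-text files of MOF identifiers, one per line (the file names and ID scheme are hypothetical, not from the repo):

```python
# Minimal leakage audit: do any leaderboard test MOFs appear in the
# pretraining corpus? File names below are illustrative placeholders.
from pathlib import Path

pretrain_ids = set(Path("pretraining_mofs.txt").read_text().split())
test_ids = set(Path("leaderboard_test_mofs.txt").read_text().split())

overlap = pretrain_ids & test_ids
if overlap:
    print(f"{len(overlap)} test MOFs appear in the pretraining set,")
    print(f"e.g. {sorted(overlap)[:5]}")
else:
    print("No identifier overlap between pretraining and test sets.")
```

Identifier matching is only a first pass: the same structure can be stored under different names in different databases, so a stricter audit would compare the structures themselves (e.g., via structure hashes).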

kjappelbaum (Owner, Author) commented

It is on my to-do list, see #417.

The reason I didn't do it so far is (indeed) that I think one needs to be a bit more careful with hyperparameter optimization and pretraining.
At least the hyperparameter optimization should happen within the cross-validation loop of the benchmark (i.e., nested cross-validation).
De-duplicating the pretraining dataset would be nice, but it is probably not as relevant as being careful with the hyperparameter optimization.
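
"Hyperparameter optimization within the cross-validation loop" is the nested cross-validation pattern: an inner loop tunes hyperparameters using only the training folds, and an outer loop produces the benchmark estimate, so no test fold ever influences the chosen hyperparameters. A minimal scikit-learn sketch of the pattern (the Ridge estimator and its grid are placeholders, not the benchmark's actual models):

```python
from sklearn.datasets import make_regression
from sklearn.linear_model import Ridge
from sklearn.model_selection import GridSearchCV, KFold, cross_val_score

X, y = make_regression(n_samples=200, n_features=10, random_state=0)

# Inner loop: hyperparameter search restricted to the training folds.
inner_cv = KFold(n_splits=3, shuffle=True, random_state=0)
tuned_model = GridSearchCV(Ridge(), {"alpha": [0.1, 1.0, 10.0]}, cv=inner_cv)

# Outer loop: performance estimate for the benchmark; each outer test
# fold is only ever used for scoring, never for tuning.
outer_cv = KFold(n_splits=5, shuffle=True, random_state=0)
scores = cross_val_score(tuned_model, X, y, cv=outer_cv)
print(f"nested-CV score: {scores.mean():.3f} ± {scores.std():.3f}")
```

Tuning outside the loop (on the full dataset) would leak information from the outer test folds into the model selection and inflate benchmark scores.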
