Hi NequIP Team,

I am currently working with the NequIP framework for a project and have run into an unexpected issue during fine-tuning. I would appreciate any insights or suggestions you have.

Issue Description:

I have been fine-tuning a previously pre-trained model. However, I noticed that the validation error during fine-tuning is consistently higher than the error observed during pre-training. To investigate further, I used the validation set as the fine-tuning training set. Surprisingly, even under these conditions, the validation error of the new run (especially in the earliest epochs) remains higher than it was during pre-training.

Intuitively, when the training set is identical to the validation set, the validation error should quickly drop below the pre-training error, since the model is being optimized directly on the data it is evaluated on.

I would like to understand whether this higher validation error during fine-tuning is a common result inherent to the algorithm itself, or whether it might be due to some improper settings on my part. If it is the latter, I am happy to provide my input and output files for further analysis.
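One way to narrow this down might be to evaluate the loaded pre-trained weights on the validation set *before* taking any optimizer step, and then again after the first step. If the error is already elevated before any step, the gap likely comes from data or normalization differences; if it only appears after stepping, it is a training-dynamics effect. The sketch below is a generic PyTorch illustration (a stand-in linear model and synthetic data, not the actual NequIP network, loss, or training loop) showing how a single step with a too-large learning rate can push the error above the pre-trained baseline, mimicking the early-epoch bump described above:

```python
# Generic diagnostic sketch -- NOT the NequIP model or trainer.
# A tiny linear model stands in for "pre-trained weights"; the point is
# only to separate the baseline error from the error after one step.
import torch
import torch.nn as nn

torch.manual_seed(0)

model = nn.Linear(4, 1)          # stand-in for the pre-trained model
loss_fn = nn.MSELoss()

x = torch.randn(32, 4)           # stand-in for a validation batch
# Targets close to the model's own predictions, so the "pre-trained"
# baseline error is small, as it would be after successful pre-training.
y = model(x).detach() + 0.01 * torch.randn(32, 1)

# 1) Baseline: error of the unmodified pre-trained weights,
#    measured before any optimizer step.
with torch.no_grad():
    baseline = loss_fn(model(x), y).item()

# 2) One optimizer step with a deliberately large learning rate can
#    *raise* the error, mimicking the early-epoch bump in fine-tuning.
opt = torch.optim.SGD(model.parameters(), lr=2.0)
opt.zero_grad()
loss_fn(model(x), y).backward()
opt.step()

with torch.no_grad():
    after_step = loss_fn(model(x), y).item()

print(f"baseline={baseline:.6f}  after one large step={after_step:.6f}")
```

If the baseline itself is already worse than the pre-training validation error, I would look at settings that are recomputed from the new dataset (e.g. per-species shifts/scales or other dataset statistics) rather than at the optimizer.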
Thank you very much for your time and assistance.
Best regards,
Ruoyu