
Can't explain low results for some models #131

Open
NebelAI opened this issue Mar 16, 2023 · 1 comment

Comments


NebelAI commented Mar 16, 2023

Hey,

This is the second time I've encountered unexpectedly low results for specific models. In short, I once trained deepset/gbert-base with train_msmarco_v3_margin_MSE.py and it worked like a charm. Then I tried the large version (deepset/gbert-large), and all results produced by evaluate_sbert.py were almost zero (NDCG@1/5/10/100/1000 = 0.001...). The base model, again, produced good results.

Now I did the same with xlm-roberta-base, which again gave good results, while microsoft/xlm-align produced bad results once more. What am I missing here? Are some models simply not technically feasible for this setup?
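For reference, NDCG@k (the metric reported by evaluate_sbert.py) is a ranking-quality score: it is near 1.0 when relevant documents are ranked at the top and near 0 when they are buried. A minimal sketch of how it is computed (a standard textbook formulation, not the repo's actual implementation):

```python
import math

def dcg_at_k(relevances, k):
    # Discounted cumulative gain: each relevance grade is discounted
    # by the log of its rank position (ranks start at 1, so log2(i + 2)).
    return sum(rel / math.log2(i + 2) for i, rel in enumerate(relevances[:k]))

def ndcg_at_k(ranked_relevances, k):
    # Normalize by the DCG of the ideal ordering (relevances sorted descending),
    # so a perfect ranking scores exactly 1.0.
    ideal_dcg = dcg_at_k(sorted(ranked_relevances, reverse=True), k)
    if ideal_dcg == 0:
        return 0.0
    return dcg_at_k(ranked_relevances, k) / ideal_dcg

# A model that ranks the single relevant document first scores 1.0;
# one that ranks it fourth scores noticeably lower.
print(ndcg_at_k([1, 0, 0, 0], 10))            # 1.0
print(round(ndcg_at_k([0, 0, 0, 1], 10), 3))  # 0.431
```

Scores of ~0.001 at every cutoff therefore mean the relevant documents are essentially never retrieved near the top, which points to degenerate embeddings (e.g. a collapsed or untrained encoder) rather than a merely weaker model.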

@NouamaneTazi (Contributor) commented:

Are you suspecting a problem in training or evaluation?
