
[MODEL EVALUATION REQUEST] intfloat/multilingual-e5-large-instruct #387

Open
8 tasks done
KennethEnevoldsen opened this issue Apr 14, 2024 · 3 comments
Labels
  • model evaluation request — Request to evaluate a model and add it to the leaderboard(s)
  • small model (<7B) — This model has less than 7B parameters, so can be evaluated on an RTX 4090 GPU or smaller.

Comments

@KennethEnevoldsen
Collaborator

Model ID

intfloat/multilingual-e5-large-instruct

Model type

Encoder model (e.g., BERT)

Model languages

  • Danish
  • Swedish
  • Norwegian (Bokmål or Nynorsk)
  • Icelandic
  • Faroese
  • German
  • Dutch
  • English

Merged model

Not a merged model

@KennethEnevoldsen KennethEnevoldsen added the model evaluation request Request to evaluate a model and add it to the leaderboard(s) label Apr 14, 2024
@saattrupdan saattrupdan self-assigned this Apr 15, 2024
@saattrupdan
Member

saattrupdan commented Apr 15, 2024

This raises a scandeval.exceptions.InvalidBenchmark: NaN value detected in model outputs, even with mixed precision disabled. exception. Sometimes mixed precision isn't disabled correctly, so I will try benchmarking it in full fp32, which sometimes resolves the NaNs. It's weird, though, as the model is in fp16.
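(Not from the ScandEval codebase — a minimal NumPy sketch of why fp16 inference can produce NaNs where fp32 does not: float16 overflows above ~65504 to inf, and subsequent inf arithmetic yields NaN.)

```python
import numpy as np

# float16 overflows above ~65504, producing inf; inf - inf then yields NaN.
logits_fp16 = np.array([60000.0, 60000.0], dtype=np.float16)
summed = logits_fp16 + logits_fp16   # 120000 overflows to inf in fp16
diff = summed - summed               # inf - inf -> NaN
print(np.isnan(diff).any())          # True

# The same computation cast to float32 stays finite.
logits_fp32 = logits_fp16.astype(np.float32)
summed32 = logits_fp32 + logits_fp32
diff32 = summed32 - summed32
print(np.isnan(diff32).any())        # False
```

This is why forcing fp32 is a reasonable first attempt; if the NaNs persist in fp32 (as reported below), the cause is likely elsewhere than precision overflow.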

@saattrupdan saattrupdan added the small model (<7B) This model has less than 7B parameters, so can be evaluated on an RTX 4090 GPU or smaller. label Apr 15, 2024
@saattrupdan
Member

> This raises a scandeval.exceptions.InvalidBenchmark: NaN value detected in model outputs, even with mixed precision disabled. exception. Sometimes mixed precision isn't disabled correctly, so I will try benchmarking it in full fp32, which sometimes resolves the NaNs. It's weird, though, as the model is in fp16.

This still happens with fp32.

@saattrupdan saattrupdan removed their assignment Apr 15, 2024
@saattrupdan
Member

Created an issue for the bug: #389


2 participants