
Conversation Retrieval Task and TopiOCQA dataset #714

Merged
merged 12 commits into embeddings-benchmark:main
May 20, 2024

Conversation

vaibhavad
Contributor

@vaibhavad vaibhavad commented May 14, 2024

Checklist for adding MMTEB dataset

Reason for dataset addition:

  • I have tested that the dataset runs with the mteb package.
  • I have run the following models on the task (adding the results to the PR). These can be run using the mteb run -m {model_name} -t {task_name} command.
    • sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2
    • intfloat/multilingual-e5-small
  • I have checked that the performance is neither trivial (both models gain close to perfect scores) nor random (both models gain close to random scores).
  • If the dataset is too big (e.g. >2048 examples), consider using self.stratified_subsampling() under dataset_transform().
  • I have filled out the metadata object in the dataset file (find documentation on it here).
  • Run tests locally to make sure nothing is broken using make test.
  • Run the formatter to format the code using make lint.
  • I have added points for my submission to the points folder using the PR number as the filename (e.g. 438.jsonl).

@vaibhavad vaibhavad added the WIP Work In Progress label May 14, 2024
@vaibhavad vaibhavad marked this pull request as draft May 14, 2024 19:06
Contributor

@orionw orionw left a comment


Looks really good overall, I think it's almost there! Excited to have these added 🚀

Three review comment threads on mteb/evaluation/evaluators/RetrievalEvaluator.py (all marked outdated/resolved).
@vaibhavad
Contributor Author

Based on the current changes, here are the scenarios now:

  1. User provides just the encode and/or predict function: fall back to the default implementation, where history turns are concatenated, for both bi-encoders and cross-encoders.
  2. User additionally provides an encode_conversations function: for bi-encoders, the user-provided function converts the conversation history to an embedding. Cross-encoders still follow the default implementation, i.e. history-turn concatenation.
  3. User additionally provides a convert_conv_history_to_query function: the user-provided logic is used by both bi-encoders and cross-encoders to convert the conversation history to a single input string.

Supporting both 2 and 3 provides maximum flexibility and options to the user (see the sketch below).
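
For concreteness, here is a minimal sketch of what a user-provided model could look like under these scenarios. The method names encode_conversations and convert_conv_history_to_query are the ones discussed in this PR; the exact signatures, the turn-averaging strategy, and the SentenceTransformer backbone are illustrative assumptions, not the final interface.

```python
from typing import List

import numpy as np
from sentence_transformers import SentenceTransformer


class MyConversationalModel:
    """Illustrative only: signatures and behaviour are assumptions, not the final mteb API."""

    def __init__(self, model_name: str = "intfloat/multilingual-e5-small"):
        self.model = SentenceTransformer(model_name)

    def encode(self, sentences: List[str], **kwargs) -> np.ndarray:
        # Scenario 1: plain encode, used for corpus passages and, by default,
        # for concatenated conversation histories.
        return self.model.encode(sentences)

    def encode_conversations(self, conversations: List[List[str]], **kwargs) -> np.ndarray:
        # Scenario 2 (bi-encoders): custom conversation -> embedding logic,
        # here averaging the embeddings of the individual turns.
        return np.stack([self.model.encode(turns).mean(axis=0) for turns in conversations])

    def convert_conv_history_to_query(self, conversations: List[List[str]], **kwargs) -> List[str]:
        # Scenario 3 (bi- and cross-encoders): custom conversation -> single
        # query string; a query rewriter could go here, this just concatenates turns.
        return [" ".join(turns) for turns in conversations]
```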

@orionw
Contributor

orionw commented May 15, 2024

I like the plan to extend it into the cross-encoders! Can we collapse 2 and 3 into the same encode_conversations though? I think they can both call the same function, which defaults to concatenation, and then the user can extend it to average embeddings, provide a query rewriter, etc.

@vaibhavad
Contributor Author

Can we collapse 2 and 3 into the same encode_conversations though?

I thought about that but there are a couple of issues here.

  1. The goal of encode_conversations is to take a conversation history as input and output an embedding. This is not relevant to cross-encoders, which jointly attend to some text representation of the conversation and the document, so they need a separate method to produce that text.
  2. If we use encode_conversations with two possible output types (embedding for bi-encoders, text for cross-encoders), then users will not be able to submit one single model that runs on the entire benchmark (see the sketch below).

What do you think about these concerns?
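
On the second concern, here is a rough sketch of how the evaluator side could dispatch if the two hooks stay separate (purely illustrative, not the actual RetrievalEvaluator code): each hook keeps a single, unambiguous return type, so one model implementing both can run on the whole benchmark.

```python
from typing import List, Union

import numpy as np


def get_conversation_queries(
    model, conversations: List[List[str]], is_cross_encoder: bool
) -> Union[np.ndarray, List[str]]:
    # Illustrative dispatch only; names and control flow are assumptions.
    if is_cross_encoder:
        # Cross-encoders jointly attend to a text representation of the
        # conversation and the document, so they need query strings.
        return model.convert_conv_history_to_query(conversations)
    # Bi-encoders need one embedding per conversation history.
    return model.encode_conversations(conversations)
```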

@orionw
Contributor

orionw commented May 16, 2024

Summary from a brief chat we had, for note taking purposes.

We will keep both encode_conversations and convert_conv_history_to_query to allow both averaging query embeddings and converting the conversation history to a text string (for cross-encoders).

I think all that's left is adding the results files and the points files @vaibhavad. Thanks for the help!

@vaibhavad vaibhavad removed the WIP Work In Progress label May 18, 2024
@vaibhavad vaibhavad marked this pull request as ready for review May 20, 2024 01:16
@vaibhavad
Contributor Author

@orionw - This is ready to be merged! Thanks for all the support and feedback throughout the process.

@KennethEnevoldsen KennethEnevoldsen requested review from KennethEnevoldsen and removed request for KennethEnevoldsen May 20, 2024 14:18
@orionw
Contributor

orionw commented May 20, 2024

Thanks @vaibhavad! LGTM.

Two very minor comments/questions:

  • Feel free to give yourself an additional 3-5 points for adding the conversational retrieval features if you want; it can go in the bug fixes category. This is more than just adding a dataset :)
  • Are the retrieval numbers for TopiOCQA in the right ballpark for these relatively weak embedding models? Just wanted to check, since nDCG is around 0.10 (which could be totally fine, but I wanted to confirm).

@vaibhavad
Contributor Author

Thanks, I added 4 points under bug fixes. I am also working with @xhluca, and the conversational retrieval workflow may change slightly to include the role of the speaker as well (assistant, user, etc.), but it is best to take that up in a different PR.

Regarding the scores, they look okay to me. The corpus is 25 million passages, so the random score would be far worse than this. Models trained on conversational retrieval should get scores of roughly 0.6-0.8. So 0.1 seems a reasonable score for a model that is not trained on this task but can still do some surface-level matching. Also, the first question of each conversation is a decontextualized question, similar to many QA datasets that these models have been trained on, so some performance may be coming from there.
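
For reference, a rough sketch of how such a baseline run could be reproduced via the Python API, assuming the task is registered under the name "TopiOCQA" and using one of the checklist models (the output folder name is arbitrary):

```python
import mteb
from sentence_transformers import SentenceTransformer

# One of the two baseline models listed in the checklist.
model = SentenceTransformer("intfloat/multilingual-e5-small")

# Assumes the new task is registered as "TopiOCQA".
evaluation = mteb.MTEB(tasks=["TopiOCQA"])
results = evaluation.run(model, output_folder="results/multilingual-e5-small")
```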

Feel free to merge the branch :) Thanks again for all the help and support.

@orionw orionw self-requested a review May 20, 2024 20:25
@orionw orionw merged commit 140de21 into embeddings-benchmark:main May 20, 2024
7 checks passed
@xhluca
Contributor

xhluca commented May 20, 2024

Awesome! I have a similar dataset (as @vaibhavad mentioned). Is it possible for me to request both of you (@orionw @vaibhavad) as reviewers for my next PR?

dokato pushed a commit to dokato/mteb that referenced this pull request May 24, 2024
…k#714)

* conv retrieval evaluation

* add encode_conversations to DenseRetrievalExactSearch

* remove duplication of default logic of conv -> query

* fix: queries has been converted from dict to list by this

* make it cross encoder compatible

* topiocqa dataset

* dont need a separate function

* metadata, full corpus

* more metadata

* baseline results

* add points

* update points