Is there a reason to use `questions_torchhub_0_shot.jsonl` instead of `questions_torchhub_bm25.jsonl` when evaluating the retriever with torchhub?

This is what is mentioned in the documentation for the evaluation script. The arguments appear to be the same as in the zero-shot case, except that the retriever is also passed.
```shell
python get_llm_responses_retriever.py \
    --retriever bm25 \
    --model gpt-3.5-turbo \
    --api_key $API_KEY \
    --output_file gpt-3.5-turbo_torchhub_0_shot.jsonl \
    --question_data eval-data/questions/torchhub/questions_torchhub_0_shot.jsonl \
    --api_name torchhub \
    --api_dataset ../data/api/torchhub_api.jsonl
```
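For contrast, my expectation was that a bm25 retriever run would point `--question_data` at the bm25 question file instead. A sketch of what I assumed the invocation would look like (the `--output_file` name here is my own, chosen to mirror the pattern above, not taken from the documentation):

```shell
# Assumed invocation using the bm25-augmented questions instead of the
# zero-shot ones; the output file name is a guess following the same pattern.
python get_llm_responses_retriever.py \
    --retriever bm25 \
    --model gpt-3.5-turbo \
    --api_key $API_KEY \
    --output_file gpt-3.5-turbo_torchhub_bm25.jsonl \
    --question_data eval-data/questions/torchhub/questions_torchhub_bm25.jsonl \
    --api_name torchhub \
    --api_dataset ../data/api/torchhub_api.jsonl
```

If the script already builds the bm25 context itself when `--retriever bm25` is passed, that would explain why the zero-shot question file is the documented input, but the docs do not say so explicitly.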