Nandan Thakur edited this page Jun 29, 2022 · 3 revisions

🍻 The BEIR Benchmark

Welcome to the official Wiki of the BEIR benchmark. BEIR is a heterogeneous benchmark containing diverse IR tasks. It also provides a common, easy-to-use framework for evaluating your NLP-based retrieval models on the tasks in the benchmark.

This guide will show you how to use the BEIR benchmark effectively for your own use cases.

For more information, check out our publications: