Paper | Installation | Quick Example | Datasets | Wiki | Hugging Face

:beers: What is it?

BEIR is a heterogeneous benchmark containing diverse IR tasks. It also provides a common and easy framework for evaluating your NLP-based retrieval models within the benchmark.

For an overview, check out our new wiki page: https://github.com/beir-cellar/beir/wiki.

For models and datasets, check out our Hugging Face (HF) page: https://huggingface.co/BeIR.

For the leaderboard, check out our Eval AI page: https://eval.ai/web/challenges/challenge-page/1897.

For more information, check out our publications:

  • BEIR: A Heterogeneous Benchmark for Zero-shot Evaluation of Information Retrieval Models (NeurIPS 2021, Datasets and Benchmarks Track)
  • Resources for Brewing BEIR: Reproducible Reference Models and an Official Leaderboard (SIGIR 2024 Resource Track)

:beers: Installation

Install via pip:

pip install beir

If you want to build from source, use:

$ git clone https://github.com/beir-cellar/beir.git
$ cd beir
$ pip install -e .

Tested with Python versions 3.9 and above.
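
A quick way to confirm the installation from Python (a minimal sketch using only the standard library; it simply reports the installed package version):

from importlib.metadata import version

import beir  # fails here if the installation is broken
print("beir", version("beir"))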

:beers: Features

  • Preprocess your own IR dataset or use one of the 17 already-preprocessed benchmark datasets
  • Wide range of settings covered, spanning diverse benchmarks useful for both academia and industry
  • Evaluates well-known retrieval architectures (lexical, dense, sparse, and reranking-based); import paths for these families are sketched after this list
  • Add and evaluate your own model in an easy-to-use framework using different state-of-the-art evaluation metrics
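
The architecture families above map onto importable search and reranking classes. The lines below are an orientation sketch; the module paths reflect recent beir releases and may differ slightly across versions:

from beir.retrieval.search.lexical import BM25Search                # lexical retrieval (Elasticsearch BM25)
from beir.retrieval.search.dense import DenseRetrievalExactSearch   # dense bi-encoder retrieval
from beir.retrieval.search.sparse import SparseSearch               # sparse retrieval models
from beir.reranking import Rerank                                   # cross-encoder reranking on top of a first stage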

:beers: Quick Example

For more example code, please refer to our Examples and Tutorials Wiki page.

from beir import util, LoggingHandler
from beir.retrieval import models
from beir.datasets.data_loader import GenericDataLoader
from beir.retrieval.evaluation import EvaluateRetrieval
from beir.retrieval.search.dense import DenseRetrievalExactSearch as DRES

import logging
import pathlib, os

#### Just some code to print debug information to stdout
logging.basicConfig(format='%(asctime)s - %(message)s',
                    datefmt='%Y-%m-%d %H:%M:%S',
                    level=logging.INFO,
                    handlers=[LoggingHandler()])
#### /print debug information to stdout

#### Download scifact.zip dataset and unzip the dataset
dataset = "scifact"
url = f"https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/{dataset}.zip"
out_dir = os.path.join(pathlib.Path(__file__).parent.absolute(), "datasets")
data_path = util.download_and_unzip(url, out_dir)

#### Provide the data_path where scifact has been downloaded and unzipped
corpus, queries, qrels = GenericDataLoader(data_folder=data_path).load(split="test")

#### Load the SBERT model and retrieve using cosine-similarity
model = DRES(models.SentenceBERT("Alibaba-NLP/gte-modernbert-base"), batch_size=16)

### Or load models directly from HuggingFace
# model = DRES(models.HuggingFace(
#     "intfloat/e5-large-unsupervised",
#     max_length=512,
#     pooling="mean",
#     normalize=True,
#     prompts={"query": "query: ", "passage": "passage: "}), batch_size=16)

retriever = EvaluateRetrieval(model, score_function="cos_sim") # or "dot" for dot product
results = retriever.retrieve(corpus, queries)

#### Evaluate your model with NDCG@k, MAP@k, Recall@k and Precision@k where k = [1,3,5,10,100,1000]
ndcg, _map, recall, precision = retriever.evaluate(qrels, results, retriever.k_values)
mrr = retriever.evaluate_custom(qrels, results, retriever.k_values, metric="mrr")

### If you want to save your results and runfile (useful for reranking)
results_dir = os.path.join(pathlib.Path(__file__).parent.absolute(), "results")
os.makedirs(results_dir, exist_ok=True)

#### Save the evaluation runfile & results
util.save_runfile(os.path.join(results_dir, f"{dataset}.run.trec"), results)
util.save_results(os.path.join(results_dir, f"{dataset}.json"), ndcg, _map, recall, precision, mrr)
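
The corpus, queries, and qrels loaded above can also be fed to the lexical and reranking pipelines. The sketch below is not part of the original example: it assumes an Elasticsearch instance reachable at localhost:9200, and the cross-encoder checkpoint name is only an illustration.

from beir.retrieval.search.lexical import BM25Search as BM25
from beir.reranking.models import CrossEncoder
from beir.reranking import Rerank

#### Lexical retrieval with BM25 (requires a running Elasticsearch server)
bm25_retriever = EvaluateRetrieval(BM25(index_name=dataset, hostname="localhost:9200", initialize=True))
bm25_results = bm25_retriever.retrieve(corpus, queries)

#### Rerank the top-100 BM25 hits with a cross-encoder
reranker = Rerank(CrossEncoder("cross-encoder/ms-marco-MiniLM-L-6-v2"), batch_size=128)
rerank_results = reranker.rerank(corpus, queries, bm25_results, top_k=100)

#### Evaluate the reranked run with the same metrics as above
ndcg, _map, recall, precision = bm25_retriever.evaluate(qrels, rerank_results, bm25_retriever.k_values)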

:beers: Available Datasets

Command to generate the md5 hash from a terminal: md5sum filename.zip.
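
The same check can be done from Python (a small sketch; the archive path is a placeholder):

import hashlib

#### Compute the MD5 checksum of a downloaded dataset archive (placeholder path)
with open("datasets/scifact.zip", "rb") as f:
    print(hashlib.md5(f.read()).hexdigest())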

You can view all datasets available here or on Hugging Face.

| Dataset | Website | BEIR-Name | Public? | Type | Queries | Corpus | Rel D/Q | Download | md5 |
|---------|---------|-----------|---------|------|---------|--------|---------|----------|-----|
| MSMARCO | Homepage | msmarco | ✅ | train, dev, test | 6,980 | 8.84M | 1.1 | Link | 444067daf65d982533ea17ebd59501e4 |
| TREC-COVID | Homepage | trec-covid | ✅ | test | 50 | 171K | 493.5 | Link | ce62140cb23feb9becf6270d0d1fe6d1 |
| NFCorpus | Homepage | nfcorpus | ✅ | train, dev, test | 323 | 3.6K | 38.2 | Link | a89dba18a62ef92f7d323ec890a0d38d |
| BioASQ | Homepage | bioasq | ❌ | train, test | 500 | 14.91M | 4.7 | No | How to Reproduce? |
| NQ | Homepage | nq | ✅ | train, test | 3,452 | 2.68M | 1.2 | Link | d4d3d2e48787a744b6f6e691ff534307 |
| HotpotQA | Homepage | hotpotqa | ✅ | train, dev, test | 7,405 | 5.23M | 2.0 | Link | f412724f78b0d91183a0e86805e16114 |
| FiQA-2018 | Homepage | fiqa | ✅ | train, dev, test | 648 | 57K | 2.6 | Link | 17918ed23cd04fb15047f73e6c3bd9d9 |
| Signal-1M(RT) | Homepage | signal1m | ❌ | test | 97 | 2.86M | 19.6 | No | How to Reproduce? |
| TREC-NEWS | Homepage | trec-news | ❌ | test | 57 | 595K | 19.6 | No | How to Reproduce? |
| Robust04 | Homepage | robust04 | ❌ | test | 249 | 528K | 69.9 | No | How to Reproduce? |
| ArguAna | Homepage | arguana | ✅ | test | 1,406 | 8.67K | 1.0 | Link | 8ad3e3c2a5867cdced806d6503f29b99 |
| Touche-2020 | Homepage | webis-touche2020 | ✅ | test | 49 | 382K | 19.0 | Link | 46f650ba5a527fc69e0a6521c5a23563 |
| CQADupstack | Homepage | cqadupstack | ✅ | test | 13,145 | 457K | 1.4 | Link | 4e41456d7df8ee7760a7f866133bda78 |
| Quora | Homepage | quora | ✅ | dev, test | 10,000 | 523K | 1.6 | Link | 18fb154900ba42a600f84b839c173167 |
| DBPedia | Homepage | dbpedia-entity | ✅ | dev, test | 400 | 4.63M | 38.2 | Link | c2a39eb420a3164af735795df012ac2c |
| SCIDOCS | Homepage | scidocs | ✅ | test | 1,000 | 25K | 4.9 | Link | 38121350fc3a4d2f48850f6aff52e4a9 |
| FEVER | Homepage | fever | ✅ | train, dev, test | 6,666 | 5.42M | 1.2 | Link | 5a818580227bfb4b35bb6fa46d9b6c03 |
| Climate-FEVER | Homepage | climate-fever | ✅ | test | 1,535 | 5.42M | 3.0 | Link | 8b66f0a9126c521bae2bde127b4dc99d |
| SciFact | Homepage | scifact | ✅ | train, test | 300 | 5K | 1.1 | Link | 5f7d1de60b170fc8027bb7898e2efca1 |
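
All public datasets in the table follow the same download URL pattern used in the Quick Example, so several of them can be fetched in one loop (a sketch; the BEIR-Names are taken from the table above):

import os, pathlib
from beir import util

out_dir = os.path.join(pathlib.Path(__file__).parent.absolute(), "datasets")

#### Download and unzip a subset of the public datasets by BEIR-Name
for dataset in ["scifact", "nfcorpus", "arguana"]:
    url = f"https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/{dataset}.zip"
    data_path = util.download_and_unzip(url, out_dir)
    print(dataset, "->", data_path)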

:beers: Additional Information

We also provide a variety of additional information on our Wiki. Please refer to the following pages:

Quick Start

  • Installing BEIR
  • Examples and Tutorials

Datasets

  • Datasets Available
  • Multilingual Datasets
  • Load your Custom Dataset (a minimal loading sketch follows this list)
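
For the custom-dataset page referenced above, a folder in the BEIR format can be loaded with the same GenericDataLoader used in the Quick Example. The layout below is a sketch with illustrative file names and contents:

from beir.datasets.data_loader import GenericDataLoader

#### Expected layout of a custom dataset folder (illustrative):
####   my_dataset/
####   ├── corpus.jsonl      one JSON object per line: {"_id": "d1", "title": "...", "text": "..."}
####   ├── queries.jsonl     one JSON object per line: {"_id": "q1", "text": "..."}
####   └── qrels/test.tsv    tab-separated with header: query-id, corpus-id, score
corpus, queries, qrels = GenericDataLoader(data_folder="my_dataset").load(split="test")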

Models

  • Models Available
  • Evaluate your Custom Model (a minimal interface sketch follows this list)
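
For the custom-model page referenced above, any object that exposes encode_queries and encode_corpus can be plugged into the dense exact-search wrapper used in the Quick Example. The class below is a hypothetical placeholder that returns random embeddings, only to show the expected interface:

from typing import Dict, List

import numpy as np
from beir.retrieval.evaluation import EvaluateRetrieval
from beir.retrieval.search.dense import DenseRetrievalExactSearch as DRES

class MyCustomModel:
    #### Hypothetical placeholder model: returns random vectors of a fixed dimension
    def encode_queries(self, queries: List[str], batch_size: int = 16, **kwargs) -> np.ndarray:
        return np.random.rand(len(queries), 768)

    def encode_corpus(self, corpus: List[Dict[str, str]], batch_size: int = 8, **kwargs) -> np.ndarray:
        #### Each corpus entry is a dict with "title" and "text" fields
        return np.random.rand(len(corpus), 768)

retriever = EvaluateRetrieval(DRES(MyCustomModel(), batch_size=16), score_function="dot")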

Metrics

  • Metrics Available

Miscellaneous

  • BEIR Leaderboard
  • Course Material on IR

:beers: Disclaimer

Similar to TensorFlow Datasets or Hugging Face's datasets library, we have only downloaded and prepared publicly available datasets. We distribute these datasets in a specific format, but we do not vouch for their quality or fairness, nor claim that you have a license to use them. It remains your responsibility to determine whether you have permission to use each dataset under its license, and to cite the rightful owner of the dataset.

If you're a dataset owner and wish to update any part of it, or do not want your dataset to be included in this library, feel free to post an issue here or make a pull request!

If you're a dataset owner and wish to include your dataset or model in this library, feel free to post an issue here or make a pull request!

:beers: Citing & Authors

If you find this repository helpful, feel free to cite our publication BEIR: A Heterogeneous Benchmark for Zero-shot Evaluation of Information Retrieval Models:

@inproceedings{
    thakur2021beir,
    title={{BEIR}: A Heterogeneous Benchmark for Zero-shot Evaluation of Information Retrieval Models},
    author={Nandan Thakur and Nils Reimers and Andreas R{\"u}ckl{\'e} and Abhishek Srivastava and Iryna Gurevych},
    booktitle={Thirty-fifth Conference on Neural Information Processing Systems Datasets and Benchmarks Track (Round 2)},
    year={2021},
    url={https://openreview.net/forum?id=wCu6T5xFjeJ}
}

If you use any baseline scores from the BEIR leaderboard, feel free to cite our publication Resources for Brewing BEIR: Reproducible Reference Models and an Official Leaderboard:

@inproceedings{kamalloo:2024,
    author = {Kamalloo, Ehsan and Thakur, Nandan and Lassance, Carlos and Ma, Xueguang and Yang, Jheng-Hong and Lin, Jimmy},
    title = {Resources for Brewing BEIR: Reproducible Reference Models and Statistical Analyses},
    year = {2024},
    isbn = {9798400704314},
    publisher = {Association for Computing Machinery},
    address = {New York, NY, USA},
    url = {https://doi.org/10.1145/3626772.3657862},
    doi = {10.1145/3626772.3657862},
    abstract = {BEIR is a benchmark dataset originally designed for zero-shot evaluation of retrieval models across 18 different domain/task combinations. In recent years, we have witnessed the growing popularity of models based on representation learning, which naturally begs the question: How effective are these models when presented with queries and documents that differ from the training data? While BEIR was designed to answer this question, our work addresses two shortcomings that prevent the benchmark from achieving its full potential: First, the sophistication of modern neural methods and the complexity of current software infrastructure create barriers to entry for newcomers. To this end, we provide reproducible reference implementations that cover learned dense and sparse models. Second, comparisons on BEIR are performed by reducing scores from heterogeneous datasets into a single average that is difficult to interpret. To remedy this, we present meta-analyses focusing on effect sizes across datasets that are able to accurately quantify model differences. By addressing both shortcomings, our work facilitates future explorations in a range of interesting research questions.},
    booktitle = {Proceedings of the 47th International ACM SIGIR Conference on Research and Development in Information Retrieval},
    pages = {1431–1440},
    numpages = {10},
    keywords = {domain generalization, evaluation, reproducibility},
    location = {Washington DC, USA},
    series = {SIGIR '24}
}

The main contributors of this repository are:

  • Nandan Thakur, Personal Website: thakur-nandan.github.io

Contact person: Nandan Thakur, nandant@gmail.com

Don't hesitate to send us an e-mail or report an issue if something is broken (and it shouldn't be) or if you have further questions.

This repository contains experimental software and is published for the sole purpose of giving additional background details on the respective publication.

:beers: Collaboration

The BEIR Benchmark has been made possible due to a collaborative effort of the following universities and organizations:

  • UKP Lab, Technical University of Darmstadt
  • University of Waterloo
  • Hugging Face

:beers: Contributors

Thanks go to all these wonderful collaborators for their contributions towards the BEIR benchmark:


  • Nandan Thakur
  • Nils Reimers
  • Iryna Gurevych
  • Jimmy Lin
  • Andreas Rücklé
  • Abhishek Srivastava
