LOFT: A 1 Million+ Token Long-Context Benchmark

This repository houses the resources for LOFT, the Long Context Frontiers benchmark, introduced in the research paper Can Long-Context Language Models Subsume Retrieval, RAG, SQL, and More?. LOFT consists of 6 long-context task categories spanning retrieval, multi-hop compositional reasoning, and more, totaling 35 datasets and 4 modalities.

Installation

$ git clone git@github.com:google-deepmind/loft.git
$ cd loft/
$ pip install -r requirements.txt
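
Optionally, you can install the requirements into a fresh virtual environment to keep dependencies isolated. This is standard Python practice rather than something the repository requires, and the environment name below is just a placeholder:

$ python3 -m venv loft-env
$ source loft-env/bin/activate
$ pip install -r requirements.txt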

Download Datasets and Prompts

The script below downloads all the LOFT datasets under BASE_DIR.

$ BASE_DIR=your-choice-of-directory
$ sh download.sh $BASE_DIR

Each dataset is also available from the links in the Datasets table. For a small subset of datasets, download.sh additionally runs preprocess.py, which infills the missing fields in the queries and corpus files. Once the download completes, you will see the following file structure:

$BASE_DIR
└── data
     ├── retrieval
     │   ├── arguana
     │   │   ├── 128k
     │   │   │   ├── corpus.jsonl
     │   │   │   ├── dev_queries.jsonl
     │   │   │   ├── few_shot_queries.jsonl
     │   │   │   └── test_queries.jsonl
     │   │   ├── 1m
     │   │   └── 32k
     │   ├── fever
     │   │   ├── ...
     │   ├── ...
     ├── rag
     ├── sql
     ├── icl
     └── mm
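
As a quick sanity check after downloading, you can inspect one of the retrieval datasets. The paths below follow the tree above; adjust the dataset and context length to whatever you downloaded:

$ wc -l ${BASE_DIR}/data/retrieval/arguana/128k/*.jsonl
$ head -n 1 ${BASE_DIR}/data/retrieval/arguana/128k/corpus.jsonl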

We also provide an example prompt in PROMPT_EXAMPLE.txt showing how Corpus-in-Context (CiC) prompting can be done for the text retrieval task.
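
To see the exact formatting of a CiC prompt, the file can be inspected directly:

$ head -n 40 PROMPT_EXAMPLE.txt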

Inference and Evaluation

We currently support Gemini (e.g., gemini-1.5-flash-002) on Vertex AI for inference. You will need a PROJECT_ID from Google Cloud. To run inference with gemini-1.5-flash-002 and evaluate the predictions:

# Positional arguments: the base data directory and the dataset name.
BASE_DIR=$1
DATASET=$2
LENGTH="128k"
TASK_TYPE="retrieval"
SPLIT="dev"
PROMPT_TYPE="few_shot_with_cot"
# Prompt name, e.g. retrieval_arguana_128k_dev:few_shot_with_cot.
PROMPT="${TASK_TYPE}_${DATASET}_${LENGTH}_${SPLIT}:${PROMPT_TYPE}"
echo "Prompt: ${PROMPT}"

# Directory where model predictions will be written.
mkdir -p ${BASE_DIR}/outputs/${TASK_TYPE}/${DATASET}/${LENGTH}
answer_file_extension="jsonl"

# Run Gemini inference on Vertex AI; predictions are written to --output_path.
python run_inference.py \
    --prompt_name ${PROMPT} \
    --task_type ${TASK_TYPE} \
    --base_dir ${BASE_DIR} \
    --data_dir ${TASK_TYPE}/${DATASET}/${LENGTH} \
    --split ${SPLIT} \
    --context_length ${LENGTH} \
    --output_path ${BASE_DIR}/outputs/${TASK_TYPE}/${DATASET}/${LENGTH}/${SPLIT}_predictions.jsonl \
    --project_id ${PROJECT_ID} \
    --overwrite

# Score the predictions against the gold dev queries.
python run_evaluation.py \
    --answer_file_path ${BASE_DIR}/data/${TASK_TYPE}/${DATASET}/${LENGTH}/dev_queries.${answer_file_extension} \
    --pred_file_path ${BASE_DIR}/outputs/${TASK_TYPE}/${DATASET}/${LENGTH}/${SPLIT}_predictions.jsonl \
    --task_type ${TASK_TYPE}

The same script is available as infer_eval.sh. We provide example queries and predictions files in evaluation/example_predictions/. Each task_type outputs several different metric scores; see the Datasets table to find which task_type to use for each dataset and the primary evaluation metric reported in the paper.
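
For example, assuming PROJECT_ID is exported in your shell (the script reads it but does not set it) and that dataset names match the lowercase directory names shown above, a full retrieval run on FEVER looks roughly like:

$ export PROJECT_ID=your-gcp-project-id
$ sh infer_eval.sh ${BASE_DIR} fever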

Datasets

Task | Dataset | Description | Task Type | Primary Metric | Infilling Needed? | Download
Text Retrieval | ArguAna | Argument Retrieval | retrieval | recall@1 | - | Link
Text Retrieval | FEVER | Fact Checking | retrieval | recall@1 | - | Link
Text Retrieval | FIQA | Question Answering | retrieval | recall@1 | ✅ | Link
Text Retrieval | MS MARCO | Web Search | retrieval | recall@1 | ✅ | Link
Text Retrieval | NQ | Question Answering | retrieval | recall@1 | - | Link
Text Retrieval | Quora | Duplication Detection | retrieval | recall@1 | ✅ | Link
Text Retrieval | SciFact | Citation Prediction | retrieval | recall@1 | - | Link
Text Retrieval | Touché-2020 | Argument Retrieval | retrieval | recall@1 | ✅ | Link
Text Retrieval | TopiOCQA | Multi-turn QA | retrieval | recall@1 | - | Link
Text Retrieval | HotPotQA | Multi-hop QA | retrieval | mrecall@2 | - | Link
Text Retrieval | MuSiQue | Multi-hop QA | retrieval | mrecall@5 | - | Link
Text Retrieval | QAMPARI | Multi-target QA | retrieval | mrecall@5 | - | Link
Text Retrieval | QUEST | Multi-target QA | retrieval | mrecall@3 | - | Link
Visual Retrieval | Flickr30k | Image Retrieval | retrieval | recall@1 | ✅ | Coming Soon
Visual Retrieval | MS COCO | Image Retrieval | retrieval | recall@1 | ✅ | Coming Soon
Visual Retrieval | OVEN | Image-text Retrieval | retrieval | recall@1 | - | Link
Visual Retrieval | MSR-VTT | Video Retrieval | retrieval | recall@1 | ✅ | Coming Soon
Audio Retrieval | FLEURS-en | Audio Retrieval | retrieval | recall@1 | - | Link
Audio Retrieval | FLEURS-es | Audio Retrieval | retrieval | recall@1 | - | Link
Audio Retrieval | FLEURS-fr | Audio Retrieval | retrieval | recall@1 | - | Link
Audio Retrieval | FLEURS-hi | Audio Retrieval | retrieval | recall@1 | - | Link
Audio Retrieval | FLEURS-zh | Audio Retrieval | retrieval | recall@1 | - | Link
RAG | NQ | Question Answering | rag | subspan_em | - | Link
RAG | TopiOCQA | Multi-turn QA | rag | subspan_em | - | Link
RAG | HotPotQA | Multi-hop QA | rag | subspan_em | - | Link
RAG | MuSiQue | Multi-hop QA | rag | subspan_em | - | Link
RAG | QAMPARI | Multi-target QA | multi_value_rag | subspan_em | - | Link
RAG | QUEST | Multi-target QA | multi_value_rag | subspan_em | - | Link
SQL | Spider | Single-turn SQL | sql | exec_acc | - | Link
SQL | SParC | Multi-turn SQL | sql | exec_acc | - | Link
Many-Shot ICL | BBH-date | Multiple-choice QA | icl | em | - | Link
Many-Shot ICL | BBH-salient | Multiple-choice QA | icl | em | - | Link
Many-Shot ICL | BBH-tracking7 | Multiple-choice QA | icl | em | - | Link
Many-Shot ICL | BBH-web | Multiple-choice QA | icl | em | - | Link
Many-Shot ICL | LIB-dialogue | Classification | - | - | ✅ | Coming Soon

LOFT-Hard Subset

The experiments in our paper showed that Gemini 1.5 already performs well on many LOFT datasets but still leaves headroom on others. We therefore recommend iterating on the following four datasets:

  • MuSiQue, QAMPARI, QUEST, Spider

The full data and inference pipelines for these datasets are supported in the current open-source release.

Past & Upcoming Releases

  • Remaining multi-modal data and inference.
  • Prompt conversion code (data => prompt).
  • Inference code and prompts for retrieval (10/25/24).
  • Evaluation code for ICL, plus some ICL and visual retrieval datasets (8/30/24).
  • Evaluation code for text tasks and code to regenerate some of the LOFT datasets (6/29/24).
  • Initial release with links to download many of the LOFT text datasets (6/20/24).

Citing this work

@article{Lee2024LongContext,
  title={Can Long-Context Language Models Subsume Retrieval, RAG, SQL, and More?},
  author={Jinhyuk Lee and Anthony Chen and Zhuyun Dai and Dheeru Dua and Devendra Singh Sachan and Michael Boratko and Yi Luan and Sébastien M. R. Arnold and Vincent Perot and Siddharth Dalmia and Hexiang Hu and Xudong Lin and Panupong Pasupat and Aida Amini and Jeremy R. Cole and Sebastian Riedel and Iftekhar Naim and Ming-Wei Chang and Kelvin Guu},
  journal={ArXiv},
  year={2024},
  volume={abs/2406.13121},
  url={https://arxiv.org/abs/2406.13121}
}

License and disclaimer

Copyright 2024 DeepMind Technologies Limited

All software is licensed under the Apache License, Version 2.0 (Apache 2.0); you may not use this file except in compliance with the Apache 2.0 license. You may obtain a copy of the Apache 2.0 license at: https://www.apache.org/licenses/LICENSE-2.0

All other materials are licensed under the Creative Commons Attribution 4.0 International License (CC-BY). You may obtain a copy of the CC-BY license at: https://creativecommons.org/licenses/by/4.0/legalcode

Individual tasks may be subject to copyright and licensing from their respective owners - please see individual download files for details.

Unless required by applicable law or agreed to in writing, all software and materials distributed here under the Apache 2.0 or CC-BY licenses are distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the licenses for the specific language governing permissions and limitations under those licenses.

This is not an official Google product.

Tags

long-context

Information

Organization: BenchFlow
Release Date: April 18, 2025
GitHub: https://github.com/google-deepmind/loft
Paper: https://arxiv.org/abs/2406.13121