Promptriever: Instruction-Trained Retrievers Can Be Prompted Like Language Models
September 17, 2024
Authors: Orion Weller, Benjamin Van Durme, Dawn Lawrie, Ashwin Paranjape, Yuhao Zhang, Jack Hessel
cs.AI
Abstract
Instruction-tuned language models (LM) are able to respond to imperative
commands, providing a more natural user interface compared to their base
counterparts. In this work, we present Promptriever, the first retrieval model
able to be prompted like an LM. To train Promptriever, we curate and release a
new instance-level instruction training set from MS MARCO, spanning nearly 500k
instances. Promptriever not only achieves strong performance on standard
retrieval tasks, but also follows instructions. We observe: (1) large gains
(reaching SoTA) on following detailed relevance instructions (+14.3 p-MRR /
+3.1 nDCG on FollowIR), (2) significantly increased robustness to lexical
choices/phrasing in the query+instruction (+12.9 Robustness@10 on InstructIR),
and (3) the ability to perform hyperparameter search via prompting to reliably
improve retrieval performance (+1.4 average increase on BEIR). Promptriever
demonstrates that retrieval models can be controlled with prompts on a
per-query basis, setting the stage for future work aligning LM prompting
techniques with information retrieval.
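The core idea above — steering a retriever by concatenating a free-form instruction to the query before encoding — can be sketched with a toy example. This is purely illustrative: Promptriever is a trained bi-encoder, and the bag-of-words "encoder" below is a hypothetical stand-in for its learned embedding model.

```python
# Illustrative sketch of instruction-prompted retrieval. The embed()
# function is a toy stand-in for a trained dense encoder; only the
# query+instruction concatenation pattern mirrors the paper's setup.
import math
from collections import Counter


def embed(text: str) -> Counter:
    # Toy "encoder": bag-of-words counts in place of dense embeddings.
    return Counter(text.lower().split())


def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0


def search(query: str, docs: list[str], instruction: str = "") -> list[str]:
    # The instruction is simply appended to the query text before
    # encoding, so relevance can be steered on a per-query basis.
    q = embed(f"{query} {instruction}".strip())
    return sorted(docs, key=lambda d: cosine(q, embed(d)), reverse=True)


docs = [
    "python tutorial for beginners",
    "python snake species found in asia",
]

# Ambiguous query: the shorter programming document ranks first.
print(search("python", docs))

# An instruction re-ranks the results toward the intended sense.
print(search("python", docs,
             instruction="the query refers to the snake species not the programming language"))
```

With the plain query, the programming document wins on this toy similarity; adding the natural-language instruction flips the ranking toward the reptile document, which is the per-query controllability the abstract describes.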