ReZero: Enhancing LLM search ability by trying one-more-time
April 15, 2025
Authors: Alan Dao, Thinh Le
cs.AI
Abstract
Retrieval-Augmented Generation (RAG) improves Large Language Model (LLM)
performance on knowledge-intensive tasks but depends heavily on initial search
query quality. Current methods, often using Reinforcement Learning (RL),
typically focus on query formulation or reasoning over results, without
explicitly encouraging persistence after a failed search. We introduce ReZero
(Retry-Zero), a novel RL framework that directly rewards the act of retrying a
search query following an initial unsuccessful attempt. This incentivizes the
LLM to explore alternative queries rather than prematurely halting. ReZero
demonstrates significant improvement, achieving 46.88% accuracy compared to a
25% baseline. By rewarding persistence, ReZero enhances LLM robustness in
complex information-seeking scenarios where initial queries may prove
insufficient.
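The abstract describes ReZero's core mechanism only at a high level: an RL reward that pays out when the model issues a follow-up search query after an unsuccessful one. The sketch below illustrates that idea; the function name, the trajectory schema, the `hit` flag, and the `retry_bonus` weight are all illustrative assumptions, not the paper's actual reward design.

```python
# A minimal sketch of the retry-reward idea described in the abstract.
# The trajectory schema, the "hit" flag, and the reward weights are
# illustrative assumptions; the paper's actual reward is not given here.

def retry_reward(trajectory, answer_correct, retry_bonus=0.2):
    """Score one rollout: reward a correct final answer, plus a bonus
    for each new search issued after a search that found nothing."""
    reward = 1.0 if answer_correct else 0.0
    searches = [s for s in trajectory if s["action"] == "search"]
    for i in range(1, len(searches)):
        if not searches[i - 1]["hit"]:  # previous query came up empty
            reward += retry_bonus       # pay for trying one more time
    return reward

# Example rollout: the first query misses, the model retries and succeeds.
rollout = [
    {"action": "search", "query": "ReZero RL framework", "hit": False},
    {"action": "search", "query": "ReZero retry reward LLM", "hit": True},
    {"action": "answer"},
]
print(retry_reward(rollout, answer_correct=True))  # 1.2
```

In a real training setup such a retry term would presumably be combined with correctness and formatting rewards so the model cannot game the bonus by retrying indefinitely; capping the number of rewarded retries is one simple safeguard.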