Retrieval-Augmented Generation with Conflicting Evidence
April 17, 2025
Authors: Han Wang, Archiki Prasad, Elias Stengel-Eskin, Mohit Bansal
cs.AI
Abstract
Large language model (LLM) agents are increasingly employing
retrieval-augmented generation (RAG) to improve the factuality of their
responses. However, in practice, these systems often need to handle ambiguous
user queries and potentially conflicting information from multiple sources
while also suppressing inaccurate information from noisy or irrelevant
documents. Prior work has generally studied and addressed these challenges in
isolation, considering only one aspect at a time, such as handling ambiguity or
robustness to noise and misinformation. We instead consider multiple factors
simultaneously, proposing (i) RAMDocs (Retrieval with Ambiguity and
Misinformation in Documents), a new dataset that simulates complex and
realistic scenarios for conflicting evidence for a user query, including
ambiguity, misinformation, and noise; and (ii) MADAM-RAG, a multi-agent
approach in which LLM agents debate the merits of an answer across multiple
rounds, allowing an aggregator to collate responses corresponding to
disambiguated entities while discarding misinformation and noise, thereby
handling diverse sources of conflict jointly. We demonstrate the effectiveness
of MADAM-RAG using both closed- and open-source models on AmbigDocs -- which
requires presenting all valid answers for ambiguous queries -- improving over
strong RAG baselines by up to 11.40%, and on FaithEval -- which requires
suppressing misinformation -- improving by up to 15.80% (absolute) with
Llama3.3-70B-Instruct. Furthermore, we find that RAMDocs poses a challenge for
existing RAG baselines (Llama3.3-70B-Instruct obtains an exact-match score of
only 32.60). While MADAM-RAG begins to address these conflicting factors, our
analysis indicates that a substantial gap remains, especially as the imbalance
between supporting evidence and misinformation increases.
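The abstract describes MADAM-RAG only at a high level: per-source LLM agents debate an answer over multiple rounds, and an aggregator collates responses for distinct disambiguated entities while discarding misinformation and noise. Below is a minimal sketch of that control flow, assuming one agent per retrieved document and a generic call_llm completion function; the prompt wording and all names here are illustrative assumptions, not the authors' implementation.

```python
from typing import Callable, List

def madam_rag(query: str, documents: List[str],
              call_llm: Callable[[str], str], rounds: int = 2) -> str:
    """Sketch of a MADAM-RAG-style debate: one agent per document,
    multi-round discussion, then a single aggregation step."""
    # Round 0: each agent answers independently from its own document.
    answers = [
        call_llm(f"Document: {doc}\nQuestion: {query}\n"
                 "Answer using only this document.")
        for doc in documents
    ]
    # Debate rounds: every agent sees all current answers and may
    # defend or revise its own, staying grounded in its document.
    for _ in range(rounds):
        transcript = "\n".join(f"Agent {i}: {a}"
                               for i, a in enumerate(answers))
        answers = [
            call_llm(f"Document: {doc}\nQuestion: {query}\n"
                     f"Current answers from all agents:\n{transcript}\n"
                     "Defend or revise your answer using only your document.")
            for doc in documents
        ]
    # Aggregation: keep one answer per distinct entity the query could
    # refer to; drop answers judged to stem from misinformation or noise.
    transcript = "\n".join(f"Agent {i}: {a}" for i, a in enumerate(answers))
    return call_llm(
        f"Question: {query}\nFinal agent answers:\n{transcript}\n"
        "Return every valid answer (one per distinct entity), discarding "
        "answers based on misinformation or irrelevant documents."
    )
```

On RAMDocs-style inputs, `documents` would mix evidence for several valid disambiguations with misinformation and noise, so the aggregator's output may legitimately contain multiple answers rather than a single one.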