Retrieval-Augmented Generation with Conflicting Evidence
April 17, 2025
Authors: Han Wang, Archiki Prasad, Elias Stengel-Eskin, Mohit Bansal
cs.AI
Abstract
Large language model (LLM) agents are increasingly employing
retrieval-augmented generation (RAG) to improve the factuality of their
responses. However, in practice, these systems often need to handle ambiguous
user queries and potentially conflicting information from multiple sources
while also suppressing inaccurate information from noisy or irrelevant
documents. Prior work has generally studied and addressed these challenges in
isolation, considering only one aspect at a time, such as handling ambiguity or
robustness to noise and misinformation. We instead consider multiple factors
simultaneously, proposing (i) RAMDocs (Retrieval with Ambiguity and
Misinformation in Documents), a new dataset that simulates complex and
realistic scenarios for conflicting evidence for a user query, including
ambiguity, misinformation, and noise; and (ii) MADAM-RAG, a multi-agent
approach in which LLM agents debate the merits of an answer over multiple
rounds, allowing an aggregator to collate responses corresponding to
disambiguated entities while discarding misinformation and noise, thereby
handling diverse sources of conflict jointly. We demonstrate the effectiveness
of MADAM-RAG with both closed-source and open-source models. On AmbigDocs,
which requires presenting all valid answers for ambiguous queries, MADAM-RAG
improves over strong RAG baselines by up to 11.40%; on FaithEval, which
requires suppressing misinformation, it improves by up to 15.80% (absolute)
with Llama3.3-70B-Instruct. Furthermore, we find that RAMDocs poses a
challenge for existing RAG baselines (Llama3.3-70B-Instruct obtains only a
32.60 exact-match score). While MADAM-RAG begins to address these conflicting
factors, our analysis indicates that a substantial gap remains, especially as
the imbalance between supporting evidence and misinformation increases.
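To make the debate-and-aggregate mechanism concrete, below is a minimal sketch of a MADAM-RAG-style loop. Everything in it is an assumption for illustration: the prompts, the fixed round count, the `llm` callable (a stand-in for any chat-completion API), and the name `madam_rag_sketch` are hypothetical, not the paper's actual implementation.

```python
from typing import Callable, List

def madam_rag_sketch(
    query: str,
    documents: List[str],
    llm: Callable[[str], str],  # hypothetical stand-in for any chat-completion call
    rounds: int = 3,
) -> str:
    # Step 1: one agent per retrieved document answers from its document alone.
    answers = [
        llm(f"Using ONLY the document below, answer the query.\n"
            f"Document: {doc}\nQuery: {query}")
        for doc in documents
    ]
    # Step 2: multi-round debate -- each agent sees the others' answers and
    # revises or defends its own, so weakly supported answers (noise,
    # misinformation) can be challenged.
    for _ in range(rounds):
        transcript = "\n".join(f"Agent {i}: {a}" for i, a in enumerate(answers))
        answers = [
            llm(f"Your document: {doc}\nOther agents answered:\n{transcript}\n"
                f"Query: {query}\nRevise or defend your answer in one sentence.")
            for doc in documents
        ]
    # Step 3: an aggregator collates the post-debate answers, keeping one
    # answer per distinct (disambiguated) entity and discarding answers the
    # debate exposed as misinformation or irrelevant noise.
    transcript = "\n".join(f"Agent {i}: {a}" for i, a in enumerate(answers))
    return llm(
        f"Query: {query}\nPost-debate agent answers:\n{transcript}\n"
        "List every well-supported answer (one per distinct entity) and drop "
        "answers based on misinformation or irrelevant documents."
    )
```

A toy call might look like the following, using hypothetical data that mirrors the kinds of conflict RAMDocs combines: an ambiguous query with two valid referents, one misinformation document, and one irrelevant (noise) document. `my_llm` here is any user-supplied completion function.

```python
docs = [
    "Michael Jordan the basketball player was born in 1963.",            # valid answer 1
    "Michael Jordan the machine-learning researcher was born in 1956.",  # valid answer 2
    "Michael Jordan was born in 1999.",                                  # misinformation
    "The Chicago Bulls play their home games at the United Center.",     # noise
]
print(madam_rag_sketch("When was Michael Jordan born?", docs, llm=my_llm))
```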