RetroLLM: Empowering Large Language Models to Retrieve Fine-grained Evidence within Generation

December 16, 2024
Authors: Xiaoxi Li, Jiajie Jin, Yujia Zhou, Yongkang Wu, Zhonghua Li, Qi Ye, Zhicheng Dou
cs.AI

Abstract

Large language models (LLMs) exhibit remarkable generative capabilities but often suffer from hallucinations. Retrieval-augmented generation (RAG) offers an effective solution by incorporating external knowledge, but existing methods still face several limitations: additional deployment costs of separate retrievers, redundant input tokens from retrieved text chunks, and the lack of joint optimization of retrieval and generation. To address these issues, we propose RetroLLM, a unified framework that integrates retrieval and generation into a single, cohesive process, enabling LLMs to directly generate fine-grained evidence from the corpus with constrained decoding. Moreover, to mitigate false pruning in the process of constrained evidence generation, we introduce (1) hierarchical FM-Index constraints, which generate corpus-constrained clues to identify a subset of relevant documents before evidence generation, reducing irrelevant decoding space; and (2) a forward-looking constrained decoding strategy, which considers the relevance of future sequences to improve evidence accuracy. Extensive experiments on five open-domain QA datasets demonstrate RetroLLM's superior performance across both in-domain and out-of-domain tasks. The code is available at https://github.com/sunnynexus/RetroLLM.
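The core mechanism described above, decoding constrained so that generated evidence occurs verbatim in the corpus, can be illustrated with a minimal sketch. The snippet below is not RetroLLM's implementation: it substitutes a naive substring scan for the FM-Index (which would answer "which tokens can extend this span?" without scanning) and a hypothetical `score_fn` standing in for the LLM's token scores. It only shows the basic idea of restricting each decoding step to corpus-valid continuations.

```python
# Minimal sketch of corpus-constrained decoding, under stated assumptions:
# `corpus` is a list of tokenized documents, and `score_fn(context, token)`
# is a hypothetical stand-in for an LLM's next-token score.

def corpus_continuations(corpus, prefix):
    """Return the set of tokens that extend `prefix` somewhere in the corpus.
    (A real FM-Index supports this query efficiently; this naive scan is
    only for illustration.)"""
    n = len(prefix)
    allowed = set()
    for doc in corpus:
        for i in range(len(doc) - n):
            if doc[i:i + n] == prefix:
                allowed.add(doc[i + n])
    return allowed

def constrained_greedy_decode(score_fn, corpus, max_len=32):
    """Greedily pick the highest-scoring token among corpus-valid
    continuations, so the generated span appears verbatim in the corpus.
    RetroLLM's forward-looking strategy would instead score candidate
    *future sequences*, not single tokens, to avoid pruning prefixes that
    only pay off later."""
    evidence = []
    for _ in range(max_len):
        allowed = corpus_continuations(corpus, evidence)
        if not allowed:
            break  # the prefix can no longer be extended within the corpus
        evidence.append(max(allowed, key=lambda tok: score_fn(evidence, tok)))
    return evidence

# Toy usage with a dummy scorer that prefers longer tokens (a real system
# would use model logits):
corpus = [
    "the capital of france is paris".split(),
    "paris is known for the louvre".split(),
]
print(" ".join(constrained_greedy_decode(lambda ctx, tok: len(tok), corpus)))
# -> "capital of france is paris", a verbatim span from the corpus
```

Note how the greedy variant can commit to a locally high-scoring token whose corpus continuations turn out to be irrelevant; this is the false-pruning problem that the paper's hierarchical FM-Index constraints (narrowing decoding to a relevant document subset first) and forward-looking decoding are designed to mitigate.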
