RetroLLM: Empowering Large Language Models to Retrieve Fine-grained Evidence within Generation

December 16, 2024
Authors: Xiaoxi Li, Jiajie Jin, Yujia Zhou, Yongkang Wu, Zhonghua Li, Qi Ye, Zhicheng Dou
cs.AI

Abstract

Large language models (LLMs) exhibit remarkable generative capabilities but often suffer from hallucinations. Retrieval-augmented generation (RAG) offers an effective solution by incorporating external knowledge, but existing methods still face several limitations: additional deployment costs of separate retrievers, redundant input tokens from retrieved text chunks, and the lack of joint optimization of retrieval and generation. To address these issues, we propose RetroLLM, a unified framework that integrates retrieval and generation into a single, cohesive process, enabling LLMs to directly generate fine-grained evidence from the corpus with constrained decoding. Moreover, to mitigate false pruning in the process of constrained evidence generation, we introduce (1) hierarchical FM-Index constraints, which generate corpus-constrained clues to identify a subset of relevant documents before evidence generation, reducing irrelevant decoding space; and (2) a forward-looking constrained decoding strategy, which considers the relevance of future sequences to improve evidence accuracy. Extensive experiments on five open-domain QA datasets demonstrate RetroLLM's superior performance across both in-domain and out-of-domain tasks. The code is available at https://github.com/sunnynexus/RetroLLM.
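The core mechanism the abstract describes, decoding constrained by an FM-Index over the corpus, can be illustrated with a small sketch: at every step the model may only emit tokens that keep the output a verbatim span of some corpus document, so any completed sequence is guaranteed to be fine-grained evidence rather than a hallucination. The code below is a minimal illustration of this idea under stated assumptions, not the paper's implementation: a naive word-level substring scan stands in for the FM-Index, and all names (`tiny_corpus`, `allowed_next_tokens`, `greedy_constrained_decode`, `prefs`) are hypothetical.

```python
# Minimal sketch of corpus-constrained decoding. A real FM-Index answers
# "which tokens can extend this prefix to a string still in the corpus?"
# in constant time per token; the naive word-level scan below stands in
# for it. All names here are hypothetical, not from the RetroLLM codebase.

def allowed_next_tokens(prefix, corpus):
    """Tokens t such that prefix + [t] is a contiguous span in some doc."""
    allowed = set()
    for doc in corpus:
        words = doc.split()
        if not prefix:                      # any corpus word may start a span
            allowed.update(words)
            continue
        n = len(prefix)
        for i in range(len(words) - n):     # need one word of room to extend
            if words[i:i + n] == prefix:
                allowed.add(words[i + n])
    return allowed

def greedy_constrained_decode(score_fn, corpus, max_len=8):
    """Greedily pick the highest-scoring token among corpus-valid ones."""
    out = []
    for _ in range(max_len):
        candidates = allowed_next_tokens(out, corpus)
        if not candidates:                  # no valid continuation: span ends
            break
        out.append(max(candidates, key=score_fn))
    return " ".join(out)

if __name__ == "__main__":
    tiny_corpus = [
        "paris is the capital of france",
        "the eiffel tower is in paris",
    ]
    # Stand-in for LM token scores, as if answering "What is the capital
    # of France?"; a real system would use the LLM's next-token logits.
    prefs = {"capital": 3.0, "of": 2.0, "france": 2.0, "paris": 1.0}
    print(greedy_constrained_decode(lambda t: prefs.get(t, 0.0), tiny_corpus))
    # -> "capital of france", a verbatim span of the first document
```

The paper's two refinements target the weakness visible even in this toy setup: greedy corpus-constrained search can commit to a prefix that has no relevant continuation (false pruning). The hierarchical FM-Index constraints first generate corpus-constrained clues to narrow decoding to a subset of relevant documents, and the forward-looking strategy scores candidates by the relevance of the future sequences they lead to; neither refinement is implemented in the sketch above.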
