

FactAlign: Long-form Factuality Alignment of Large Language Models

October 2, 2024
Authors: Chao-Wei Huang, Yun-Nung Chen
cs.AI

Abstract

Large language models have demonstrated significant potential as next-generation information access engines. However, their reliability is hindered by hallucination and the generation of non-factual content. This is particularly problematic in long-form responses, where assessing and ensuring factual accuracy is complex. In this paper, we address this gap by proposing FactAlign, a novel alignment framework designed to enhance the factuality of LLMs' long-form responses while maintaining their helpfulness. We introduce fKTO, a fine-grained, sentence-level alignment algorithm that extends the Kahneman-Tversky Optimization (KTO) alignment method. Leveraging recent advances in automatic factuality evaluation, FactAlign uses fine-grained factuality assessments to guide the alignment process. Our experiments on open-domain prompts and information-seeking questions demonstrate that FactAlign significantly improves the factual accuracy of LLM responses while also improving their helpfulness. Further analyses show that FactAlign trains LLMs to provide more information without losing factual precision, thereby improving the factual F1 score. Our source code, datasets, and trained models are publicly available at https://github.com/MiuLab/FactAlign.
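
The abstract does not spell out the fKTO objective, but since fKTO extends KTO to sentence granularity, a rough picture can be sketched. The snippet below is a minimal, hypothetical PyTorch rendering of a KTO-style loss applied per sentence, where an automatic factuality evaluator labels each sentence as supported or unsupported; the function name, arguments, and hyperparameter values are illustrative assumptions, not the paper's implementation.

```python
import torch

def sentence_kto_loss(policy_logps, ref_logps, supported, kl_ref,
                      beta=0.1, lam_d=1.0, lam_u=1.0):
    """Sketch of a KTO-style loss over per-sentence log-probabilities.

    policy_logps / ref_logps: (num_sentences,) summed token log-probs of each
        sentence under the policy and a frozen reference model.
    supported: (num_sentences,) 1 where an automatic factuality evaluator
        judged the sentence supported, 0 where it judged it unsupported.
    kl_ref: scalar reference point (in KTO, an estimate of the
        policy-reference KL divergence).
    NOTE: illustrative only; not the authors' fKTO implementation.
    """
    logratio = policy_logps - ref_logps  # r_theta for each sentence
    # Supported sentences gain value when their log-ratio rises above the
    # reference point; unsupported sentences gain value when it falls below.
    v_pos = lam_d * torch.sigmoid(beta * (logratio - kl_ref))
    v_neg = lam_u * torch.sigmoid(beta * (kl_ref - logratio))
    value = torch.where(supported.bool(), v_pos, v_neg)
    lam = torch.where(supported.bool(),
                      torch.full_like(value, lam_d),
                      torch.full_like(value, lam_u))
    return (lam - value).mean()
```

Under this reading, supported sentences are pushed toward higher log-ratios relative to the reference point while unsupported ones are pushed lower, which is how a sentence-level signal could improve factuality without discarding whole-response preference data.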
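The abstract also reports a factual F1 score without defining it. One common formulation in long-form factuality work is F1@K, where precision is the fraction of supported claims and recall saturates once a response contains K supported claims; the helper below sketches that metric on the assumption that FactAlign uses something comparable, with K = 64 chosen purely for illustration.

```python
def factual_f1(num_supported, num_claims, k=64):
    """F1@K-style factual F1: precision is the fraction of supported claims;
    recall saturates once the response has at least K supported claims.
    K = 64 is an illustrative value, not necessarily the paper's setting."""
    if num_claims == 0:
        return 0.0
    precision = num_supported / num_claims
    recall = min(num_supported / k, 1.0)
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

# Example: a longer response with 50 supported claims out of 55 scores higher
# than a short one with 10/10, matching the abstract's "more information
# without losing factual precision" framing.
print(factual_f1(50, 55), factual_f1(10, 10))
```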
