Multi-LLM Text Summarization
December 20, 2024
作者: Jiangnan Fang, Cheng-Tse Liu, Jieun Kim, Yash Bhedaru, Ethan Liu, Nikhil Singh, Nedim Lipka, Puneet Mathur, Nesreen K. Ahmed, Franck Dernoncourt, Ryan A. Rossi, Hanieh Deilamsalehy
cs.AI
Abstract
In this work, we propose a Multi-LLM summarization framework, and investigate
two different multi-LLM strategies including centralized and decentralized. Our
multi-LLM summarization framework has two fundamentally important steps at each
round of conversation: generation and evaluation. These steps differ depending
on whether our decentralized or centralized multi-LLM summarization method is
used. In both our multi-LLM decentralized and centralized strategies, we
have k different LLMs that generate diverse summaries of the text. However,
during evaluation, our multi-LLM centralized summarization approach leverages a
single LLM to evaluate the summaries and select the best one whereas k LLMs are
used for decentralized multi-LLM summarization. Overall, we find that our
multi-LLM summarization approaches significantly outperform the baselines that
leverage only a single LLM by up to 3x. These results indicate the
effectiveness of multi-LLM approaches for summarization.
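
To make the two strategies concrete, below is a minimal Python sketch of a single generation/evaluation round, assuming each LLM is exposed as a simple prompt-to-text callable. The prompts, the voting rule, and the helper names are illustrative assumptions, not the authors' implementation.

```python
# Sketch of one generation/evaluation round for multi-LLM summarization.
# Assumes each LLM is a callable mapping a prompt string to a completion string.
from typing import Callable, List

LLM = Callable[[str], str]  # prompt -> completion


def generate_summaries(models: List[LLM], text: str) -> List[str]:
    """Generation step: each of the k LLMs produces its own summary."""
    prompt = f"Summarize the following text:\n\n{text}"
    return [model(prompt) for model in models]


def centralized_evaluation(judge: LLM, summaries: List[str]) -> str:
    """Centralized evaluation: a single judge LLM selects the best summary."""
    numbered = "\n".join(f"[{i}] {s}" for i, s in enumerate(summaries))
    reply = judge(
        "Pick the best summary below and reply with its index number only.\n"
        + numbered
    )
    return summaries[int(reply.strip())]


def decentralized_evaluation(models: List[LLM], summaries: List[str]) -> str:
    """Decentralized evaluation: all k LLMs vote; the most-voted summary wins."""
    numbered = "\n".join(f"[{i}] {s}" for i, s in enumerate(summaries))
    votes = [
        int(m("Pick the best summary below and reply with its index number only.\n"
              + numbered).strip())
        for m in models
    ]
    return summaries[max(set(votes), key=votes.count)]
```

Under these assumptions, one centralized round is `centralized_evaluation(judge, generate_summaries(models, text))`, while the decentralized variant replaces the single judge with a vote across all k models.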