Multi-LLM Text Summarization

December 20, 2024
Authors: Jiangnan Fang, Cheng-Tse Liu, Jieun Kim, Yash Bhedaru, Ethan Liu, Nikhil Singh, Nedim Lipka, Puneet Mathur, Nesreen K. Ahmed, Franck Dernoncourt, Ryan A. Rossi, Hanieh Deilamsalehy
cs.AI

Abstract

In this work, we propose a Multi-LLM summarization framework and investigate two different multi-LLM strategies: centralized and decentralized. Our multi-LLM summarization framework has two fundamentally important steps at each round of conversation: generation and evaluation. These steps differ depending on whether the centralized or the decentralized method is used. In both strategies, k different LLMs generate diverse summaries of the text. During evaluation, however, our centralized multi-LLM summarization approach leverages a single LLM to evaluate the summaries and select the best one, whereas decentralized multi-LLM summarization uses all k LLMs as evaluators. Overall, we find that our multi-LLM summarization approaches significantly outperform baselines that leverage only a single LLM, by up to 3x. These results indicate the effectiveness of multi-LLM approaches for summarization.
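
To make the generation and evaluation steps concrete, below is a minimal sketch of one round of each strategy. It assumes every model is exposed as a simple text-in/text-out callable; the prompt wording, the single-round structure, and the majority-vote aggregation for the decentralized case are illustrative assumptions, not the paper's exact protocol.

```python
# Sketch of one round of centralized vs. decentralized multi-LLM
# summarization, following the abstract's description. The `LLM`
# callables and prompts are hypothetical placeholders.
from typing import Callable, List

LLM = Callable[[str], str]  # any text-in/text-out model endpoint

def generate_summaries(llms: List[LLM], text: str) -> List[str]:
    """Generation step: each of the k LLMs produces its own summary."""
    return [llm(f"Summarize the following text:\n{text}") for llm in llms]

def _listing(summaries: List[str]) -> str:
    return "\n".join(f"[{i}] {s}" for i, s in enumerate(summaries))

def centralized_round(llms: List[LLM], judge: LLM, text: str) -> str:
    """Centralized: a single judge LLM selects the best summary.
    Assumes the judge replies with only the chosen index."""
    summaries = generate_summaries(llms, text)
    prompt = "Reply with only the index of the best summary.\n" + _listing(summaries)
    return summaries[int(judge(prompt).strip())]

def decentralized_round(llms: List[LLM], text: str) -> str:
    """Decentralized: all k LLMs evaluate; here the votes are combined
    by simple majority (an assumption for this sketch)."""
    summaries = generate_summaries(llms, text)
    prompt = "Reply with only the index of the best summary.\n" + _listing(summaries)
    votes = [int(llm(prompt).strip()) for llm in llms]
    return summaries[max(set(votes), key=votes.count)]
```

In the actual framework these steps repeat over multiple rounds of conversation; the sketch shows only a single generation/evaluation round to highlight where the two strategies diverge.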
