Meta-Chunking: Learning Efficient Text Segmentation via Logical Perception
October 16, 2024
Authors: Jihao Zhao, Zhiyuan Ji, Pengnian Qi, Simin Niu, Bo Tang, Feiyu Xiong, Zhiyu Li
cs.AI
Abstract
Retrieval-Augmented Generation (RAG), while serving as a viable complement to
large language models (LLMs), often overlooks the crucial aspect of text
chunking within its pipeline, which impacts the quality of knowledge-intensive
tasks. This paper introduces the concept of Meta-Chunking, which refers to a
granularity between sentences and paragraphs, consisting of a collection of
sentences within a paragraph that have deep linguistic logical connections. To
implement Meta-Chunking, we designed two strategies based on LLMs: Margin
Sampling Chunking and Perplexity Chunking. The former employs LLMs to perform
binary classification on whether consecutive sentences need to be segmented,
making decisions based on the probability difference obtained from margin
sampling. The latter precisely identifies text chunk boundaries by analyzing
the characteristics of perplexity distribution. Additionally, considering the
inherent complexity of different texts, we propose a strategy that combines
Meta-Chunking with dynamic merging to achieve a balance between fine-grained
and coarse-grained text chunking. Experiments conducted on eleven datasets
demonstrate that Meta-Chunking can more efficiently improve the performance of
single-hop and multi-hop question answering based on RAG. For instance, on the
2WikiMultihopQA dataset, it outperforms similarity chunking by 1.32 while only
consuming 45.8% of the time. Our code is available at
https://github.com/IAAR-Shanghai/Meta-Chunking.
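The two chunking decision rules described above can be illustrated with a minimal sketch. This is not the paper's implementation: the function names (`should_split`, `chunk_boundaries`, `meta_chunks`), the threshold parameters, and the local-minimum rule for perplexity are assumptions made here for illustration; the actual algorithms are defined in the paper and the linked repository.

```python
def should_split(p_yes, p_no, margin=0.0):
    """Margin Sampling Chunking (sketch): an LLM is asked whether two
    consecutive sentences should be separated; split when the probability
    margin between the "yes" and "no" answers exceeds a threshold."""
    return (p_yes - p_no) > margin


def chunk_boundaries(ppls, threshold=0.0):
    """Perplexity Chunking (sketch): given per-sentence perplexities
    computed under preceding context, treat a sentence whose perplexity
    is a local minimum (lower than both neighbours by `threshold`) as
    the end of a chunk, on the assumption that low perplexity signals a
    logically self-contained span."""
    boundaries = []
    for i in range(1, len(ppls) - 1):
        if ppls[i - 1] - ppls[i] > threshold and ppls[i + 1] - ppls[i] > threshold:
            boundaries.append(i)
    return boundaries


def meta_chunks(sentences, ppls, threshold=0.0):
    """Group sentences into meta-chunks by cutting after each boundary."""
    cuts = chunk_boundaries(ppls, threshold)
    chunks, start = [], 0
    for cut in cuts:
        chunks.append(sentences[start:cut + 1])
        start = cut + 1
    chunks.append(sentences[start:])
    return chunks


if __name__ == "__main__":
    sents = ["s0", "s1", "s2", "s3", "s4", "s5"]
    ppls = [3.0, 1.5, 2.8, 2.9, 1.2, 3.1]
    print(meta_chunks(sents, ppls, threshold=0.5))
    # → [['s0', 's1'], ['s2', 's3', 's4'], ['s5']]
```

Dynamic merging, as proposed in the paper, would then combine adjacent meta-chunks up to a target length, trading off this fine granularity against coarser chunks for retrieval.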