
DELIFT: Data Efficient Language model Instruction Fine-Tuning

November 7, 2024
Authors: Ishika Agarwal, Krishna Killamsetty, Lucian Popa, Marina Danilevsky
cs.AI

Abstract

Fine-tuning large language models (LLMs) is essential for enhancing their performance on specific tasks but is often resource-intensive due to redundant or uninformative data. To address this inefficiency, we introduce DELIFT (Data Efficient Language model Instruction Fine-Tuning), a novel algorithm that systematically optimizes data selection across the three key stages of fine-tuning: (1) instruction tuning, (2) task-specific fine-tuning (e.g., reasoning, question-answering), and (3) continual fine-tuning (e.g., incorporating new data versions). Unlike existing methods that focus on single-stage optimization or rely on computationally intensive gradient calculations, DELIFT operates efficiently across all stages. Central to our approach is a pairwise utility metric that quantifies how beneficial a data sample is for improving the model's responses to other samples, effectively measuring the informational value relative to the model's current capabilities. By leveraging different submodular functions applied to this metric, DELIFT selects diverse and optimal subsets that are useful across all stages of fine-tuning. Experiments across various tasks and model scales demonstrate that DELIFT can reduce the fine-tuning data size by up to 70% without compromising performance, offering significant computational savings and outperforming existing methods in both efficiency and efficacy.
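To make the selection step more concrete, below is a minimal, illustrative sketch (not the authors' implementation) of greedy subset selection under a facility-location-style submodular objective driven by a precomputed pairwise utility matrix. The function name `greedy_facility_location`, the matrix shape conventions, and the toy random data are assumptions introduced here for illustration; DELIFT's actual utility metric and its choice of submodular functions are defined in the paper.

```python
import numpy as np

def greedy_facility_location(utility: np.ndarray, budget: int) -> list[int]:
    """Greedily pick `budget` samples maximizing a facility-location objective.

    utility[i, j] is an illustrative score for how much candidate sample j
    helps the model respond to sample i; higher means more informative.
    (Assumed stand-in for DELIFT's pairwise utility metric.)
    """
    n = utility.shape[0]
    selected: list[int] = []
    # coverage[i] = best utility sample i currently receives from the chosen subset
    coverage = np.zeros(n)

    for _ in range(budget):
        # Marginal gain of adding each candidate j: f(S ∪ {j}) - f(S)
        gains = np.maximum(utility, coverage[:, None]).sum(axis=0) - coverage.sum()
        gains[selected] = -np.inf  # never re-pick an already selected sample
        best = int(np.argmax(gains))
        selected.append(best)
        coverage = np.maximum(coverage, utility[:, best])

    return selected


# Toy usage: 10 samples with random pairwise utilities, keep 30% of the data
rng = np.random.default_rng(0)
U = rng.random((10, 10))
print(greedy_facility_location(U, budget=3))
```

The greedy strategy is the standard heuristic for monotone submodular maximization (with a (1 - 1/e) approximation guarantee), which is why diverse, non-redundant subsets can be chosen efficiently once the pairwise utilities are available.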

