CrowdSelect: Synthetic Instruction Data Selection with Multi-LLM Wisdom
March 3, 2025
作者: Yisen Li, Lingfeng Yang, Wenxuan Shen, Pan Zhou, Yao Wan, Weiwei Lin, Dongping Chen
cs.AI
Abstract
Distilling the instruction-following capabilities of advanced Large Language Models into smaller models using a selected data subset has become a mainstream approach in model training. Existing synthetic instruction data selection strategies rely mainly on single-dimensional signals (e.g., reward scores, model perplexity) and thus fail to capture the complexity of instruction-following across diverse fields. We therefore investigate more diverse signals to comprehensively capture the characteristics of instruction-response pairs, and propose three foundational metrics that leverage Multi-LLM wisdom, informed by (1) diverse LLM responses and (2) reward model assessment. Building on these base metrics, we propose CrowdSelect, an integrated metric that incorporates a clustering-based approach to maintain response diversity. Our comprehensive experiments demonstrate that the foundational metrics consistently improve performance across 4 base models on MT-bench and Arena-Hard. CrowdSelect, which efficiently incorporates all of the metrics, achieves state-of-the-art performance in both full and LoRA fine-tuning, showing improvements of 4.81% on Arena-Hard and 11.1% on MT-bench with Llama-3.2-3b-instruct. We hope our findings offer valuable insights for future research in this direction. Code is available at https://github.com/listentm/crowdselect.
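
To make the clustering-based selection idea concrete, below is a minimal sketch: instruction-response pairs are embedded, partitioned with k-means, and the top-scoring pairs are kept from each cluster so that the selected subset stays diverse rather than concentrating on a single high-scoring region. The score definition, function names, and parameters here are illustrative assumptions, not the paper's exact formulation; see the linked repository for the authors' implementation.

```python
# Illustrative sketch of clustering-based data selection. The integrated
# score is assumed to come from the paper's Multi-LLM metrics (e.g., an
# aggregate of reward-model assessments over diverse LLM responses).
import numpy as np
from sklearn.cluster import KMeans


def crowd_select(embeddings: np.ndarray,
                 scores: np.ndarray,
                 n_clusters: int = 10,
                 per_cluster: int = 5) -> list[int]:
    """Return indices of selected instruction-response pairs.

    embeddings: (N, d) embeddings of instruction-response pairs.
    scores:     (N,) integrated quality scores for each pair.
    Keeps the top `per_cluster` pairs per cluster to preserve diversity.
    """
    labels = KMeans(n_clusters=n_clusters, n_init="auto",
                    random_state=0).fit_predict(embeddings)
    selected: list[int] = []
    for c in range(n_clusters):
        members = np.flatnonzero(labels == c)
        # Rank cluster members by the integrated metric, keep the best.
        top = members[np.argsort(scores[members])[::-1][:per_cluster]]
        selected.extend(top.tolist())
    return selected


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    emb = rng.normal(size=(1000, 64))   # stand-in pair embeddings
    sc = rng.uniform(size=1000)         # stand-in integrated scores
    subset = crowd_select(emb, sc, n_clusters=8, per_cluster=4)
    print(f"selected {len(subset)} of {len(emb)} pairs")
```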