UltraIF: Advancing Instruction Following from the Wild
February 6, 2025
Authors: Kaikai An, Li Sheng, Ganqu Cui, Shuzheng Si, Ning Ding, Yu Cheng, Baobao Chang
cs.AI
Abstract
Instruction-following has made modern large language models (LLMs) helpful assistants. However, the key to taming LLMs on complex instructions remains elusive, as there are huge gaps between models trained by the open-source community and those trained by leading companies. To bridge the gap, we propose UltraIF, a simple and scalable approach for building LLMs that can follow complex instructions using only open-source data. UltraIF first decomposes real-world user prompts into simpler queries, constraints, and corresponding evaluation questions for the constraints. Then, we train an UltraComposer to compose constraint-associated prompts with evaluation questions. This prompt composer allows us to synthesize complicated instructions as well as filter responses with the evaluation questions. In our experiments, for the first time, we successfully align LLaMA-3.1-8B-Base to catch up with its instruct version on 5 instruction-following benchmarks without any benchmark information, using only an 8B model as the response generator and evaluator. The aligned model also achieves competitive scores on other benchmarks. Moreover, we show that UltraIF can further improve LLaMA-3.1-8B-Instruct through self-alignment, motivating broader use cases for the method. Our code will be available at https://github.com/kkk-an/UltraIF.
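
As a rough illustration of the pipeline the abstract describes (decompose prompts, compose constraints with the UltraComposer, then filter responses via evaluation questions), here is a minimal Python sketch of the synthesize-and-filter loop. The names used (`UltraComposer.compose`, `synthesize_example`, `generate`, `evaluate`) are hypothetical placeholders assumed for illustration, not the paper's actual interface.

```python
# A minimal sketch of the UltraIF synthesize-and-filter loop.
# All helper names here are hypothetical placeholders, not the authors' API.

from typing import Callable, List, Optional, Tuple


class UltraComposer:
    """Takes an instruction and adds one constraint to it, returning the
    more complex instruction plus an evaluation question that checks
    whether a response satisfies the new constraint."""

    def compose(self, instruction: str) -> Tuple[str, str]:
        raise NotImplementedError  # stands in for the trained composer model


def synthesize_example(
    seed_query: str,
    composer: UltraComposer,
    generate: Callable[[str], str],        # e.g. an 8B response generator
    evaluate: Callable[[str, str], bool],  # answers an eval question about a response
    num_constraints: int = 3,
) -> Optional[Tuple[str, str]]:
    """Iteratively compose constraints onto a simple query, generate a
    response, and keep the pair only if it passes every eval question."""
    instruction = seed_query
    eval_questions: List[str] = []

    for _ in range(num_constraints):
        instruction, question = composer.compose(instruction)
        eval_questions.append(question)

    response = generate(instruction)
    if all(evaluate(response, q) for q in eval_questions):
        return instruction, response
    return None  # filtered out: response violates at least one constraint
```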