COIG-P: A High-Quality and Large-Scale Chinese Preference Dataset for Alignment with Human Values

April 7, 2025
Authors: M-A-P Team, Siwei Wu, Jincheng Ren, Xinrun Du, Shuyue Guo, Xingwei Qu, Yiming Liang, Jie Liu, Yunwen Li, Tianyu Zheng, Boyu Feng, Huaqing Yuan, Zenith Wang, Jiaheng Liu, Wenhao Huang, Chenglin Cai, Haoran Que, Jian Yang, Yuelin Bai, Zekun Moore Wang, Zhouliang Yu, Qunshu Lin, Ding Pan, Yuchen Jiang, Tiannan Wang, Wangchunshu Zhou, Shenzhi Wang, Xingyuan Bu, Minghao Liu, Guoyin Wang, Ge Zhang, Chenghua Lin
cs.AI

Abstract

Aligning large language models (LLMs) with human preferences has achieved remarkable success. However, existing Chinese preference datasets are limited by small scale, narrow domain coverage, and a lack of rigorous data validation. Additionally, the reliance on human annotators for instruction and response labeling significantly constrains the scalability of human preference datasets. To address these challenges, we design an LLM-based Chinese preference dataset annotation pipeline with no human intervention. Specifically, we crawled and carefully filtered 92k high-quality Chinese queries and employed 15 mainstream LLMs to generate and score chosen-rejected response pairs. Based on this, we introduce COIG-P (Chinese Open Instruction Generalist - Preference), a high-quality, large-scale Chinese preference dataset comprising 1,009k Chinese preference pairs spanning 6 diverse domains: Chat, Code, Math, Logic, Novel, and Role. Building upon COIG-P, to reduce the overhead of using LLMs for scoring, we trained an 8B-sized Chinese Reward Model (CRM) and meticulously constructed a Chinese Reward Benchmark (CRBench). Evaluation results based on AlignBench (Liu et al., 2024) show that COIG-P significantly outperforms other Chinese preference datasets and brings performance improvements of 2% to 12% for the Qwen2/2.5 and Infinity-Instruct-3M-0625 model series. The results on CRBench demonstrate that our CRM has strong and robust scoring ability. We apply it to filter chosen-rejected response pairs in a test split of COIG-P, and our experiments show that it is comparable to GPT-4o in identifying low-quality samples while maintaining efficiency and cost-effectiveness. Our code and data are released at https://github.com/multimodal-art-projection/COIG-P.
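
The abstract describes a per-query pipeline: generate candidate responses with multiple LLMs, score them with an LLM judge, and keep the best and worst responses as a chosen-rejected pair. The sketch below illustrates that control flow only; the function signatures, the score-gap threshold, and the selection rule are illustrative assumptions and are not taken from the authors' released code.

```python
from typing import Callable, List, Optional, Tuple

def build_preference_pair(
    query: str,
    generators: List[Callable[[str], str]],  # each model: query -> response
    judge: Callable[[str, str], float],      # LLM judge: (query, response) -> score
    min_score_gap: float = 2.0,              # assumed filtering threshold, not from the paper
) -> Optional[Tuple[str, str]]:
    """Return a (chosen, rejected) response pair for one query, or None
    if the judge does not clearly prefer any response over another."""
    scored = []
    for generate in generators:
        response = generate(query)
        scored.append((judge(query, response), response))

    # Highest-scored response becomes "chosen", lowest becomes "rejected".
    scored.sort(key=lambda pair: pair[0], reverse=True)
    best_score, chosen = scored[0]
    worst_score, rejected = scored[-1]

    if best_score - worst_score < min_score_gap:
        return None
    return chosen, rejected
```

In the dataset construction described above, the generators would correspond to the 15 mainstream LLMs and the judge to an LLM scorer (or, for cheaper filtering, the 8B CRM); the sketch shows only the per-query pairing logic.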
