
SampleMix: A Sample-wise Pre-training Data Mixing Strategy by Coordinating Data Quality and Diversity

March 3, 2025
Authors: Xiangyu Xi, Deyang Kong, Jian Yang, Jiawei Yang, Zhengyu Chen, Wei Wang, Jingang Wang, Xunliang Cai, Shikun Zhang, Wei Ye
cs.AI

Abstract

Existing pretraining data mixing methods for large language models (LLMs) typically follow a domain-wise methodology, a top-down process that first determines domain weights and then performs uniform data sampling within each domain. However, these approaches neglect significant inter-domain overlaps and commonalities and fail to control the global diversity of the constructed training dataset. Furthermore, uniform sampling within domains ignores fine-grained, sample-specific features, potentially leading to a suboptimal data distribution. To address these shortcomings, we propose SampleMix, a novel sample-wise data mixing approach based on a bottom-up paradigm. It performs global cross-domain sampling by systematically evaluating the quality and diversity of each sample, thereby dynamically determining the optimal domain distribution. Comprehensive experiments across multiple downstream tasks and perplexity assessments demonstrate that SampleMix surpasses existing domain-wise methods. Meanwhile, SampleMix reaches the baselines' performance with 1.4x to 2.1x fewer training steps, highlighting its substantial potential for optimizing pre-training data.

Summary

AI-Generated Summary

Paper Overview

Core Contribution

  • Proposes SampleMix, a novel sample-wise pre-training data mixing strategy for large language models (LLMs).
  • Addresses limitations of domain-wise mixing methods by focusing on sample quality and diversity.
  • Introduces a bottom-up paradigm for global cross-domain sampling, dynamically determining optimal domain distribution.

Research Context

  • Existing pretraining data mixing methods follow a domain-wise approach, neglecting inter-domain overlaps and sample-specific features.
  • SampleMix aims to optimize data distribution by evaluating quality and diversity at the sample level.

Keywords

  • Sample-wise data mixing
  • Pretraining data optimization
  • Quality and diversity coordination
  • Large language models (LLMs)
  • Bottom-up sampling

Background

Research Gap

  • Domain-wise methods ignore inter-domain overlaps and commonalities, leading to suboptimal global diversity.
  • Uniform sampling within domains fails to account for fine-grained sample-specific features.

Technical Challenges

  • Evaluating sample quality and diversity at scale.
  • Dynamically determining optimal domain distributions based on sample-level evaluations.
  • Balancing quality and diversity in data sampling.

Prior Approaches

  • Domain-wise methods: Determine domain weights, then perform uniform sampling within each domain (contrasted in the sketch after this list).
  • Heuristic-based methods: Manually assign domain weights (e.g., upsampling high-quality datasets).
  • Learning-based methods: Train proxy models to generate optimal domain weights.
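
For contrast with SampleMix's bottom-up design, here is a minimal sketch of the top-down recipe these prior approaches share; the function name, the `domain_pools`/`domain_weights` inputs, and the average document length are illustrative assumptions, not an implementation from any of the cited methods.

```python
import random

def domain_wise_mix(domain_pools, domain_weights, token_budget, tokens_per_doc=512):
    """Top-down mixing: choose a domain by its weight, then sample uniformly inside it.

    domain_pools   : dict mapping domain name -> list of documents
    domain_weights : dict mapping domain name -> sampling probability (sums to 1)
    token_budget   : total number of tokens to draw
    tokens_per_doc : assumed average document length, used only to size the draw
    """
    domains = list(domain_weights)
    weights = [domain_weights[d] for d in domains]
    n_docs = token_budget // tokens_per_doc

    mixture = []
    for _ in range(n_docs):
        d = random.choices(domains, weights=weights, k=1)[0]  # domain weight decided first (top-down)
        mixture.append(random.choice(domain_pools[d]))        # uniform pick ignores sample-level quality/diversity
    return mixture
```

Whether the weights come from heuristics or a learned proxy model, the within-domain pick stays uniform, which is exactly the gap SampleMix targets.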

Methodology

Technical Architecture

  • Global cross-domain sampling based on sample quality and diversity.
  • Quality and diversity evaluators assign sampling weights to each sample.
  • Dynamic domain distribution based on sample-level evaluations.

Implementation Details

  • Quality evaluation: Seven dimensions (e.g., clarity, completeness, knowledge richness) scored using GPT-4o.
  • Diversity evaluation: Clustering-based approach using K-Means to measure cluster compactness and separation.
  • Sampling weight calculation: Combines normalized quality and diversity scores with a weighting factor (α).
  • Sampling frequency: Softmax-based distribution converts sampling weights into per-sample sampling counts (see the sketch after this list).
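
A minimal sketch of how these pieces could fit together, assuming sample embeddings are precomputed and quality scores (e.g., the seven GPT-4o-rated dimensions averaged into one number per sample) are given. The normalization, the way cluster compactness and separation are combined, and the softmax temperature `tau` are illustrative assumptions, not the paper's exact definitions.

```python
import numpy as np
from sklearn.cluster import KMeans

def diversity_scores(embeddings, n_clusters=100, seed=0):
    """Assumed diversity proxy: samples in sparse, well-separated clusters score higher.

    compactness: distance of a sample to its own cluster centroid (lower = denser region)
    separation : distance from the sample's centroid to the nearest other centroid
    """
    km = KMeans(n_clusters=n_clusters, random_state=seed, n_init=10).fit(embeddings)
    centroids, labels = km.cluster_centers_, km.labels_

    own_dist = np.linalg.norm(embeddings - centroids[labels], axis=1)            # compactness
    cdist = np.linalg.norm(centroids[:, None, :] - centroids[None, :, :], axis=-1)
    np.fill_diagonal(cdist, np.inf)
    sep = cdist.min(axis=1)[labels]                                              # separation

    raw = own_dist + sep                      # assumed combination of the two signals
    return (raw - raw.min()) / (raw.max() - raw.min() + 1e-8)                    # normalize to [0, 1]

def sampling_counts(quality, diversity, token_budget, tokens_per_sample, alpha=0.8, tau=1.0):
    """Combine normalized quality/diversity into weights, then softmax them into counts."""
    w = alpha * quality + (1 - alpha) * diversity        # alpha trades off quality vs. diversity
    p = np.exp(w / tau) / np.exp(w / tau).sum()          # softmax over all samples, across domains
    expected_tokens = p * token_budget                   # each sample's share of the budget
    return np.floor(expected_tokens / tokens_per_sample).astype(int)
```

With α = 0.8 (the value the summary reports as optimal), quality dominates the weight while diversity mainly breaks ties among samples of similar quality.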

Innovation Points

  • Sample-wise approach: Focuses on individual samples rather than domains.
  • Bottom-up paradigm: Dynamically determines domain distributions based on sample-level evaluations.
  • Adaptive to token budgets: Adjusts the sampling strategy to varying token budget constraints (illustrated after this list).
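
A toy run of the hypothetical `sampling_counts()` from the previous sketch shows this budget adaptivity: identical per-sample weights yield different sampling counts as the budget changes (the numbers below follow from that assumed formula, not from the paper).

```python
import numpy as np

# Reuses the hypothetical sampling_counts() defined in the earlier sketch.
quality   = np.array([0.9, 0.7, 0.4, 0.2])   # toy per-sample quality scores
diversity = np.array([0.3, 0.8, 0.9, 0.1])   # toy per-sample diversity scores
lengths   = np.array([500, 500, 500, 500])   # tokens per sample

for budget in (2_000, 10_000):               # tight vs. generous token budget
    print(budget, sampling_counts(quality, diversity, budget, lengths, alpha=0.8))
# 2000  -> [1 1 0 0]: only the highest-weight samples survive a tight budget
# 10000 -> [6 5 4 3]: a larger budget keeps every sample and upsamples the best ones
```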

Results

Experimental Setup

  • Dataset: SlimPajama (627B tokens, 7 domains).
  • Baselines: Vanilla, DoReMi, CE, BiMIX-OPT, DoGE, DML.
  • Model: 1B-parameter LLaMA models trained from scratch with 100B tokens.

Key Findings

  • SampleMix outperforms domain-wise methods across multiple downstream tasks and perplexity evaluations.
  • Achieves baseline accuracy with 1.4x to 2.1x fewer training steps, demonstrating higher efficiency.
  • Optimal performance achieved with α = 0.8, balancing quality and diversity.

Limitations

  • Hyperparameters optimized for SlimPajama may not generalize to other datasets.
  • Requires manual tuning of α for datasets with different quality and diversity characteristics.
