SampleMix: A Sample-wise Pre-training Data Mixing Strategy by Coordinating Data Quality and Diversity
Abstract
Existing pre-training data mixing methods for large language models (LLMs) typically follow a domain-wise methodology: a top-down process that first determines domain weights and then performs uniform data sampling within each domain. However, these approaches neglect significant inter-domain overlaps and commonalities, and therefore fail to control the global diversity of the constructed training dataset. Further, uniform sampling within domains ignores fine-grained, sample-specific features, potentially leading to a suboptimal data distribution. To address these shortcomings, we propose SampleMix, a novel sample-wise data mixing approach based on a bottom-up paradigm. SampleMix performs global cross-domain sampling by systematically evaluating the quality and diversity of each sample, thereby dynamically determining the optimal domain distribution. Comprehensive experiments across multiple downstream tasks and perplexity assessments demonstrate that SampleMix surpasses existing domain-based methods. Meanwhile, SampleMix reaches the baselines' performance with 1.4x to 2.1x fewer training steps, highlighting its substantial potential for optimizing pre-training data.
Summary
Paper Overview
Core Contribution
- Proposes SampleMix, a novel sample-wise pre-training data mixing strategy for large language models (LLMs).
- Addresses limitations of domain-wise mixing methods by focusing on sample quality and diversity.
- Introduces a bottom-up paradigm for global cross-domain sampling, dynamically determining optimal domain distribution.
Research Context
- Existing pretraining data mixing methods follow a domain-wise approach, neglecting inter-domain overlaps and sample-specific features.
- SampleMix aims to optimize data distribution by evaluating quality and diversity at the sample level.
Keywords
- Sample-wise data mixing
- Pretraining data optimization
- Quality and diversity coordination
- Large language models (LLMs)
- Bottom-up sampling
Background
Research Gap
- Domain-wise methods ignore inter-domain overlaps and commonalities, leading to suboptimal global diversity.
- Uniform sampling within domains fails to account for fine-grained sample-specific features.
Technical Challenges
- Evaluating sample quality and diversity at scale.
- Dynamically determining optimal domain distributions based on sample-level evaluations.
- Balancing quality and diversity in data sampling.
Prior Approaches
- Domain-wise methods: Determine domain weights and perform uniform sampling within domains.
- Heuristic-based methods: Manually assign domain weights (e.g., upsampling high-quality datasets).
- Learning-based methods: Train proxy models to generate optimal domain weights.
Methodology
Technical Architecture
- Global cross-domain sampling based on sample quality and diversity.
- Quality and diversity evaluators assign sampling weights to each sample.
- Dynamic domain distribution based on sample-level evaluations (a pipeline sketch follows this list).
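One way to picture this bottom-up paradigm is as a small pipeline in which per-sample scores drive global sampling and the domain mixture only emerges at the end. The sketch below is illustrative rather than the paper's implementation; `score_quality`, `score_diversity`, and `weights_to_counts` are hypothetical helpers standing in for the evaluators and the count assignment described under Implementation Details.

```python
from collections import Counter

def build_training_set(samples, alpha, token_budget,
                       score_quality, score_diversity, weights_to_counts):
    """Hypothetical bottom-up pipeline: sample-level scores in, domain mixture out."""
    quality = score_quality(samples)        # q_i per sample, e.g. in [0, 1]
    diversity = score_diversity(samples)    # d_i per sample, e.g. in [0, 1]
    weights = [alpha * q + (1 - alpha) * d  # coordinate quality and diversity
               for q, d in zip(quality, diversity)]
    counts = weights_to_counts(weights, samples, token_budget)  # copies per sample
    dataset = [s for s, c in zip(samples, counts) for _ in range(c)]
    # The domain distribution is an output of sample-wise sampling, not an input.
    domain_mix = Counter(s["domain"] for s in dataset)
    return dataset, domain_mix
```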
Implementation Details
- Quality evaluation: Seven dimensions (e.g., clarity, completeness, knowledge richness) scored using GPT-4o.
- Diversity evaluation: Clustering-based approach using K-Means to measure cluster compactness and separation (see the first sketch after this list).
- Sampling weight calculation: Combines normalized quality and diversity scores with a weighting factor (α).
- Sampling frequency: Softmax-based distribution to determine sampling counts for each sample (see the second sketch after this list).
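The clustering-based diversity evaluation can be sketched as follows. This is a plausible instantiation rather than the paper's exact formula: the K-Means configuration, the compactness and separation definitions, and their combination are assumptions, under which samples from loose, well-separated clusters are treated as more diverse (less redundant) than samples from tight, overlapping ones. `embeddings` is assumed to be a matrix of per-sample text embeddings.

```python
import numpy as np
from sklearn.cluster import KMeans

def diversity_scores(embeddings, n_clusters=100, seed=0):
    """Assign each sample a diversity score from its K-Means cluster's geometry."""
    embeddings = np.asarray(embeddings)
    km = KMeans(n_clusters=n_clusters, random_state=seed, n_init=10).fit(embeddings)
    labels, centers = km.labels_, km.cluster_centers_

    # Compactness proxy: mean distance of a cluster's members to its centroid
    # (a large value means a loose, less compact cluster).
    spread = np.array([
        np.linalg.norm(embeddings[labels == k] - centers[k], axis=1).mean()
        for k in range(n_clusters)
    ])
    # Separation: distance from each centroid to its nearest other centroid.
    dists = np.linalg.norm(centers[:, None, :] - centers[None, :, :], axis=-1)
    np.fill_diagonal(dists, np.inf)
    separation = dists.min(axis=1)

    # Assumed aggregation: loose, well-separated clusters score highest.
    score = spread * separation
    score = (score - score.min()) / (score.max() - score.min() + 1e-8)
    return score[labels]  # each sample inherits its cluster's score
```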
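The last two bullets (weight calculation and sampling frequency, the second sketch referenced above) can be combined into one short function. The min-max normalization, softmax temperature, and rounding scheme are assumptions; α = 0.8 is used as the default because the Results section reports it as the best-performing setting.

```python
import numpy as np

def sampling_counts(quality, diversity, avg_tokens_per_sample, token_budget,
                    alpha=0.8, temperature=1.0):
    """Turn per-sample quality/diversity scores into per-sample sampling counts."""
    def norm(x):  # min-max normalize to [0, 1]
        return (x - x.min()) / (x.max() - x.min() + 1e-8)

    quality, diversity = np.asarray(quality, float), np.asarray(diversity, float)
    weights = alpha * norm(quality) + (1 - alpha) * norm(diversity)

    # Softmax over the combined weights gives a sampling distribution over samples.
    logits = weights / temperature
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()

    # The token budget fixes the total number of draws; each sample's count is its
    # share of that total (0 drops the sample, values > 1 upsample it).
    n_draws = int(token_budget / avg_tokens_per_sample)
    return np.rint(probs * n_draws).astype(int)
```

For example, with a 100B-token budget and an average sample length of 1,000 tokens, `n_draws` is 100M, and higher-weight samples receive proportionally more of those draws.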
Innovation Points
- Sample-wise approach: Focuses on individual samples rather than domains.
- Bottom-up paradigm: Dynamically determines domain distributions based on sample-level evaluations.
- Adaptive to token budgets: Adjusts sampling strategy based on varying token budget constraints.
Results
Experimental Setup
- Dataset: SlimPajama (627B tokens, 7 domains).
- Baselines: Vanilla, DoReMi, CE, BiMIX-OPT, DoGE, DML.
- Model: 1B-parameter LLaMA models trained from scratch with 100B tokens.
Key Findings
- SampleMix outperforms domain-wise methods across multiple downstream tasks and perplexity evaluations.
- Achieves baseline accuracy with 1.4x to 2.1x fewer training steps, demonstrating higher efficiency.
- Optimal performance achieved with α = 0.8, balancing quality and diversity.
Limitations
- Hyperparameters optimized for SlimPajama may not generalize to other datasets.
- Requires manual tuning of α for datasets with different quality and diversity characteristics.