ZIP-FIT: Embedding-Free Data Selection via Compression-Based Alignment
October 23, 2024
Authors: Elyas Obbad, Iddah Mlauzi, Brando Miranda, Rylan Schaeffer, Kamal Obbad, Suhana Bedi, Sanmi Koyejo
cs.AI
Abstract
Data selection is crucial for optimizing language model (LM) performance on
specific tasks, yet most existing methods fail to effectively consider the
target task distribution.
Current approaches either ignore task-specific requirements entirely or rely
on approximations that fail to capture the nuanced patterns needed for tasks
like Autoformalization or code generation.
Methods that do consider the target distribution often rely on simplistic representations, such as hashed n-gram features, where hash collisions introduce noise.
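To illustrate the collision problem, here is a minimal sketch of a hashed n-gram representation of the kind such baselines rely on; the hash function and bucket count are illustrative assumptions, not the exact configuration of any particular method:

```python
import hashlib

def hashed_ngram_features(text: str, n: int = 2, num_buckets: int = 10_000) -> list[int]:
    """Map each word n-gram to a fixed bucket; distinct n-grams can collide."""
    counts = [0] * num_buckets
    tokens = text.split()
    for i in range(len(tokens) - n + 1):
        ngram = " ".join(tokens[i : i + n])
        # Different n-grams that hash to the same bucket are merged into one
        # count, which is the source of the noise described above.
        bucket = int(hashlib.md5(ngram.encode("utf-8")).hexdigest(), 16) % num_buckets
        counts[bucket] += 1
    return counts
```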
We introduce ZIP-FIT, a data selection framework that uses gzip compression
to directly measure alignment between potential training data and the target
task distribution.
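As a minimal sketch of this idea (not the authors' exact implementation), compression-based alignment can be scored with gzip via the normalized compression distance (NCD); the function names and top-k selection below are illustrative assumptions:

```python
import gzip

def gzip_size(text: str) -> int:
    """Compressed length in bytes, a practical proxy for Kolmogorov complexity."""
    return len(gzip.compress(text.encode("utf-8")))

def ncd(x: str, y: str) -> float:
    """Normalized compression distance: lower means x and y share more structure."""
    cx, cy, cxy = gzip_size(x), gzip_size(y), gzip_size(x + y)
    return (cxy - min(cx, cy)) / max(cx, cy)

def select_top_k(candidates: list[str], target: str, k: int) -> list[str]:
    """Keep the k candidates whose compression-based distance to the target is smallest."""
    return sorted(candidates, key=lambda c: ncd(c, target))[:k]
```

Because this relies only on a standard compressor, it needs no embeddings, hashing, or trained auxiliary models, which is what makes the approach embedding-free.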
In extensive evaluations on Autoformalization and Python code generation,
ZIP-FIT significantly outperforms leading baselines like DSIR and D4.
Models trained on ZIP-FIT-selected data achieve their lowest cross-entropy loss up to 85.1% faster than baselines, demonstrating that better task alignment leads to more efficient learning.
In addition, ZIP-FIT performs selection up to 65.8% faster than DSIR and two orders of magnitude faster than D4.
Notably, ZIP-FIT shows that smaller, well-aligned datasets often outperform larger but less targeted ones: a small amount of high-quality data is superior to a large amount of low-quality data.
Our results imply that task-aware data selection is crucial for efficient
domain adaptation, and that compression offers a principled way to measure task
alignment.
By showing that targeted data selection can dramatically improve
task-specific performance, our work provides new insights into the relationship
between data quality, task alignment, and model learning efficiency.