
Tabby: Tabular Data Synthesis with Language Models

March 4, 2025
作者: Sonia Cromp, Satya Sai Srinath Namburi GNVV, Mohammed Alkhudhayri, Catherine Cao, Samuel Guo, Nicholas Roberts, Frederic Sala
cs.AI

Abstract

While advances in large language models (LLMs) have greatly improved the quality of synthetic text data in recent years, synthesizing tabular data has received relatively less attention. We address this disparity with Tabby, a simple but powerful post-training modification to the standard Transformer language model architecture, enabling its use for tabular dataset synthesis. Tabby enables the representation of differences across columns using Gated Mixture-of-Experts, with column-specific sets of parameters. Empirically, Tabby results in data quality near or equal to that of real data. By pairing our novel LLM table training technique, Plain, with Tabby, we observe up to a 44% improvement in quality over previous methods. We also show that Tabby extends beyond tables to more general structured data, reaching parity with real data on a nested JSON dataset as well.
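To make the core idea concrete: Tabby gives each table column its own set of parameters and uses a gate to route each column to its expert. The sketch below is a minimal, illustrative toy (not the authors' implementation); the class name `ColumnMoE`, the hard gate on column names, and the toy affine experts are all assumptions made for clarity.

```python
# Minimal sketch (NOT the authors' code) of the idea behind Tabby's
# column-specific Gated Mixture-of-Experts: every table column owns a
# separate set of parameters, and a gate routes each column's hidden
# state to its own expert while the rest of the model is shared.

class ColumnMoE:
    """Route each column's hidden state to a column-specific expert."""

    def __init__(self, columns):
        # One toy "expert" per column: an affine map with its own
        # weight/bias, standing in for per-column parameter sets.
        self.experts = {
            col: {"w": 1.0 + 0.1 * i, "b": float(i)}
            for i, col in enumerate(columns)
        }

    def forward(self, column, hidden):
        # Hard gate: select the expert owned by this column and
        # apply its per-column parameters to each hidden value.
        p = self.experts[column]
        return [p["w"] * h + p["b"] for h in hidden]

moe = ColumnMoE(["age", "income", "label"])
out = moe.forward("income", [0.5, 1.0])
```

In a real Tabby-style model the experts would be Transformer sub-layers (e.g. per-column output heads) rather than affine maps, but the routing structure is the same: the column identity decides which parameters are active.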

