Scaling Laws in Patchification: An Image Is Worth 50,176 Tokens And More
February 6, 2025
Authors: Feng Wang, Yaodong Yu, Guoyizhe Wei, Wei Shao, Yuyin Zhou, Alan Yuille, Cihang Xie
cs.AI
Abstract
Since the introduction of Vision Transformer (ViT), patchification has long
been regarded as the de facto image tokenization approach for plain visual
architectures. By compressing the spatial size of images, this approach can
effectively shorten the token sequence and reduce the computational cost of
ViT-like plain architectures. In this work, we aim to thoroughly examine the
information loss caused by this patchification-based compressive encoding
paradigm and how it affects visual understanding. We conduct extensive patch
size scaling experiments and observe an intriguing scaling law in
patchification: models consistently benefit from smaller patch sizes,
attaining improved predictive performance all the way down to the minimum
patch size of 1×1, i.e., pixel tokenization. This conclusion is broadly applicable
across different vision tasks, various input scales, and diverse architectures
such as ViT and the recent Mamba models. Moreover, as a by-product, we discover
that with smaller patches, task-specific decoder heads become less critical for
dense prediction. In our experiments, we successfully scale the visual
sequence up to an exceptional length of 50,176 tokens, achieving a competitive
test accuracy of 84.6% with a base-sized model on the ImageNet-1k benchmark. We
hope this study can provide insights and theoretical foundations for future
work on building non-compressive vision models. Code is available at
https://github.com/wangf3014/Patch_Scaling.
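For readers unfamiliar with the mechanism, the sketch below illustrates ViT-style patchification and the patch-size/sequence-length trade-off the abstract describes. It is a minimal PyTorch illustration, not the authors' implementation; the `PatchEmbed` class name and all hyperparameters are assumptions. With a 224×224 input, a 1×1 patch size yields 224² = 50,176 tokens, the sequence length referenced in the title.

```python
import torch
import torch.nn as nn

class PatchEmbed(nn.Module):
    """ViT-style patchification: split an image into P x P patches and
    project each patch to a D-dimensional token (illustrative sketch)."""

    def __init__(self, patch_size=16, in_chans=3, embed_dim=768):
        super().__init__()
        # A convolution with kernel_size == stride == P is equivalent to
        # flattening each non-overlapping P x P patch and applying a shared
        # linear projection.
        self.proj = nn.Conv2d(in_chans, embed_dim,
                              kernel_size=patch_size, stride=patch_size)

    def forward(self, x):
        # x: (B, C, H, W) -> (B, N, D), where N = (H // P) * (W // P)
        x = self.proj(x)                     # (B, D, H/P, W/P)
        return x.flatten(2).transpose(1, 2)  # (B, N, D)

img = torch.randn(1, 3, 224, 224)
for p in (16, 8, 4, 2, 1):  # the patch-size scaling axis studied in the paper
    n_tokens = PatchEmbed(patch_size=p)(img).shape[1]
    print(f"patch {p}x{p}: {n_tokens} tokens")
# patch 16x16: 196 tokens ... patch 1x1: 50176 tokens
# (224^2 = 50,176, i.e., pixel tokenization of a 224x224 input)
```

The loop makes the scaling law's axis concrete: halving the patch size quadruples the token count, so self-attention cost grows steeply as patchification approaches pixel tokenization.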