Escaping Plato's Cave: Towards the Alignment of 3D and Text Latent Spaces
March 7, 2025
Authors: Souhail Hadgi, Luca Moschella, Andrea Santilli, Diego Gomez, Qixing Huang, Emanuele Rodolà, Simone Melzi, Maks Ovsjanikov
cs.AI
Abstract
Recent works have shown that, when trained at scale, uni-modal 2D vision and
text encoders converge to learned features that share remarkable structural
properties, despite arising from different representations. However, the role
of 3D encoders with respect to other modalities remains unexplored.
Furthermore, existing 3D foundation models that leverage large datasets are
typically trained with explicit alignment objectives with respect to frozen
encoders from other representations. In this work, we investigate the
possibility of a posteriori alignment of representations obtained from
uni-modal 3D encoders compared to text-based feature spaces. We show that naive
post-training feature alignment of uni-modal text and 3D encoders results in
limited performance. We then focus on extracting subspaces of the corresponding
feature spaces and discover that by projecting learned representations onto
well-chosen lower-dimensional subspaces the quality of alignment becomes
significantly higher, leading to improved accuracy on matching and retrieval
tasks. Our analysis further sheds light on the nature of these shared
subspaces, which roughly separate between semantic and geometric data
representations. Overall, ours is the first work that helps to establish a
baseline for post-training alignment of 3D uni-modal and text feature spaces,
and helps to highlight both the shared and unique properties of 3D data
compared to other representations.
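The core idea of the abstract — aligning frozen uni-modal feature spaces after training by projecting both onto a shared low-dimensional subspace — can be illustrated with a small sketch. This is not the paper's implementation: the data below is synthetic (stand-ins for 3D-encoder and text-encoder embeddings of paired shapes and captions), and the alignment uses a generic CCA/Procrustes-style construction, namely the SVD of the cross-covariance between the two centered feature sets, followed by top-1 retrieval as the evaluation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic paired embeddings: n objects, each with a "3D" feature (d3 dims)
# and a matching "text" feature (dt dims). Both are linear views of a shared
# latent signal plus noise -- a toy stand-in for pretrained uni-modal encoders.
n, k_true, d3, dt = 500, 16, 64, 48
z = rng.normal(size=(n, k_true))                        # shared latent factors
X = z @ rng.normal(size=(k_true, d3)) + 0.1 * rng.normal(size=(n, d3))
Y = z @ rng.normal(size=(k_true, dt)) + 0.1 * rng.normal(size=(n, dt))

def subspace_align(X, Y, k):
    """Project both feature spaces onto a shared k-dim subspace using the SVD
    of the cross-covariance of the centered features (Procrustes/CCA-style)."""
    Xc = X - X.mean(axis=0)
    Yc = Y - Y.mean(axis=0)
    U, _, Vt = np.linalg.svd(Xc.T @ Yc, full_matrices=False)
    return Xc @ U[:, :k], Yc @ Vt[:k].T

def retrieval_acc(A, B):
    """Top-1 cross-modal retrieval: for each row of A, is its cosine nearest
    neighbour in B the row with the same index?"""
    A = A / np.linalg.norm(A, axis=1, keepdims=True)
    B = B / np.linalg.norm(B, axis=1, keepdims=True)
    return float((np.argmax(A @ B.T, axis=1) == np.arange(len(A))).mean())

# Post-hoc alignment via a well-chosen low-dimensional shared subspace.
Xk, Yk = subspace_align(X, Y, k=16)
proj_acc = retrieval_acc(Xk, Yk)
print(f"subspace-projected retrieval accuracy: {proj_acc:.2f}")
```

In this toy setup the top singular directions of the cross-covariance capture the correlated (shared) structure between the two spaces, which is what makes matching and retrieval work; the choice of `k` plays the role of the "well-chosen lower-dimensional subspace" discussed in the abstract.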