
The Semantic Hub Hypothesis: Language Models Share Semantic Representations Across Languages and Modalities

November 7, 2024
Authors: Zhaofeng Wu, Xinyan Velocity Yu, Dani Yogatama, Jiasen Lu, Yoon Kim
cs.AI

Abstract

Modern language models can process inputs across diverse languages and modalities. We hypothesize that models acquire this capability through learning a shared representation space across heterogeneous data types (e.g., different languages and modalities), which places semantically similar inputs near one another, even if they are from different modalities/languages. We term this the semantic hub hypothesis, following the hub-and-spoke model from neuroscience (Patterson et al., 2007), which posits that semantic knowledge in the human brain is organized through a transmodal semantic "hub" that integrates information from various modality-specific "spoke" regions. We first show that model representations for semantically equivalent inputs in different languages are similar in the intermediate layers, and that this space can be interpreted using the model's dominant pretraining language via the logit lens. This tendency extends to other data types, including arithmetic expressions, code, and visual/audio inputs. Interventions in the shared representation space in one data type also predictably affect model outputs in other data types, suggesting that this shared representation space is not simply a vestigial byproduct of large-scale training on broad data, but something that is actively utilized by the model during input processing.
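The logit lens used in the abstract can be sketched in a few lines: an intermediate hidden state is pushed directly through the model's final normalization and unembedding matrix, skipping the remaining layers, to read off a token distribution at that depth. The dimensions, random weights, and simplified parameter-free LayerNorm below are illustrative assumptions, not the paper's actual models.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions (assumptions for illustration only).
d_model, vocab_size = 16, 100

# Unembedding matrix W_U maps a hidden state to vocabulary logits.
W_U = rng.standard_normal((vocab_size, d_model))

def layer_norm(h, eps=1e-5):
    """Simplified LayerNorm without learned scale/bias parameters."""
    return (h - h.mean()) / np.sqrt(h.var() + eps)

def logit_lens(hidden_state):
    """Decode an intermediate residual-stream state through the output
    head, yielding a probability distribution over the vocabulary."""
    logits = W_U @ layer_norm(hidden_state)
    probs = np.exp(logits - logits.max())  # stable softmax
    probs /= probs.sum()
    return probs

# Stand-in for a layer-l hidden state of one token position.
h_mid = rng.standard_normal(d_model)
p = logit_lens(h_mid)
print(p.argmax())  # token id the lens reads out at this layer
```

In the paper's setting, applying this readout to intermediate layers of a multilingual model tends to surface tokens from the dominant pretraining language, even when the input is in another language or modality.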
