
Differential Transformer

October 7, 2024
Authors: Tianzhu Ye, Li Dong, Yuqing Xia, Yutao Sun, Yi Zhu, Gao Huang, Furu Wei
cs.AI

Abstract

Transformer tends to overallocate attention to irrelevant context. In this work, we introduce Diff Transformer, which amplifies attention to the relevant context while canceling noise. Specifically, the differential attention mechanism calculates attention scores as the difference between two separate softmax attention maps. The subtraction cancels noise, promoting the emergence of sparse attention patterns. Experimental results on language modeling show that Diff Transformer outperforms Transformer in various settings of scaling up model size and training tokens. More intriguingly, it offers notable advantages in practical applications, such as long-context modeling, key information retrieval, hallucination mitigation, in-context learning, and reduction of activation outliers. By being less distracted by irrelevant context, Diff Transformer can mitigate hallucination in question answering and text summarization. For in-context learning, Diff Transformer not only enhances accuracy but is also more robust to order permutation, which was considered as a chronic robustness issue. The results position Diff Transformer as a highly effective and promising architecture to advance large language models.
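The core idea described above, computing attention as the difference between two separate softmax attention maps so that shared "noise" mass cancels, can be sketched as follows. This is a minimal single-head illustration based only on the abstract: the projection layout, the scalar `lam` weighting the second map, and the tensor shapes are assumptions for illustration, not the authors' implementation.

```python
import torch
import torch.nn.functional as F

def differential_attention(x, w_q1, w_k1, w_q2, w_k2, w_v, lam=0.5):
    """Single-head sketch of differential attention (assumed layout).

    Two softmax attention maps are computed from two independent
    query/key projections; their scaled difference weights the values,
    so attention mass that both maps place on irrelevant context
    cancels out. `lam` stands in for a learnable scalar here, which is
    a simplification.
    """
    d = w_q1.shape[1]  # head dimension
    q1, k1 = x @ w_q1, x @ w_k1   # projections for the first map
    q2, k2 = x @ w_q2, x @ w_k2   # projections for the second map
    v = x @ w_v

    a1 = F.softmax(q1 @ k1.transpose(-1, -2) / d**0.5, dim=-1)
    a2 = F.softmax(q2 @ k2.transpose(-1, -2) / d**0.5, dim=-1)

    # Subtracting the maps promotes sparser attention patterns.
    return (a1 - lam * a2) @ v

# Example usage with hypothetical sizes
seq_len, d_model, d_head = 8, 64, 16
x = torch.randn(1, seq_len, d_model)
proj = lambda: torch.randn(d_model, d_head) / d_model**0.5
out = differential_attention(x, proj(), proj(), proj(), proj(), proj())
```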
