How new data permeates LLM knowledge and how to dilute it

April 13, 2025
Authors: Chen Sun, Renat Aksitov, Andrey Zhmoginov, Nolan Andrew Miller, Max Vladymyrov, Ulrich Rueckert, Been Kim, Mark Sandler
cs.AI

Abstract

Large language models learn and continually learn through the accumulation of gradient-based updates, but how individual pieces of new information affect existing knowledge, leading to both beneficial generalization and problematic hallucination, remains poorly understood. We demonstrate that when learning new information, LLMs exhibit a "priming" effect: learning a new fact can cause the model to inappropriately apply that knowledge in unrelated contexts. To systematically study this phenomenon, we introduce "Outlandish," a carefully curated dataset of 1320 diverse text samples designed to probe how new knowledge permeates through an LLM's existing knowledge base. Using this dataset, we show that the degree of priming after learning new information can be predicted by measuring the token probability of key words before learning. This relationship holds robustly across different model architectures (PALM-2, Gemma, Llama), sizes, and training stages. Finally, we develop two novel techniques to modulate how new knowledge affects existing model behavior: (1) a "stepping-stone" text augmentation strategy and (2) an "ignore-k" update pruning method. These approaches reduce undesirable priming effects by 50-95% while preserving the model's ability to learn new information. Our findings provide both empirical insights into how LLMs learn and practical tools for improving the specificity of knowledge insertion in language models. Further materials: https://sunchipsster1.github.io/projects/outlandish/
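
The abstract's predictive measurement, the token probability of key words before any fine-tuning, can be made concrete. The following is a minimal sketch assuming a Hugging Face causal LM; the model name, prompt, and the helper keyword_token_prob are illustrative assumptions, not the paper's code.

```python
# Minimal sketch: probability a model assigns to a keyword's first token,
# measured BEFORE fine-tuning, in the spirit of the paper's priming predictor.
# Model name, prompt, and helper name are illustrative assumptions.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "meta-llama/Llama-3.2-1B"  # any causal LM works here
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
model.eval()

def keyword_token_prob(prompt: str, keyword: str) -> float:
    """Probability of the keyword's first token as the next token after `prompt`."""
    prompt_ids = tokenizer(prompt, return_tensors="pt").input_ids
    # Token id of the keyword as it would appear mid-sentence (leading space).
    keyword_id = tokenizer(" " + keyword, add_special_tokens=False).input_ids[0]
    with torch.no_grad():
        next_token_logits = model(prompt_ids).logits[0, -1]
    return torch.softmax(next_token_logits, dim=-1)[keyword_id].item()

# Per the paper's finding, a low pre-learning probability here predicts
# stronger priming once the corresponding fact is trained in.
print(keyword_token_prob("The color of the new fruit is", "vermilion"))
```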
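
The "ignore-k" update pruning method can be sketched similarly: before each optimizer step, the largest-magnitude fraction of gradient entries is discarded. This is one plausible reading of the technique under stated assumptions; the per-tensor selection rule and the value of k below are not taken from the paper.

```python
# Minimal sketch of an "ignore-k"-style pruning step: zero out the top-k
# fraction of gradient entries by magnitude before the optimizer applies
# the update. Per-tensor selection and k=0.1 are assumptions; the paper's
# exact selection rule may differ.
import torch

def ignore_top_k_updates(model: torch.nn.Module, k: float = 0.1) -> None:
    """Zero the largest-magnitude k fraction of each parameter's gradient."""
    for param in model.parameters():
        if param.grad is None:
            continue
        grad = param.grad
        n_drop = int(k * grad.numel())
        if n_drop == 0:
            continue
        # Magnitude threshold separating the top-k fraction of entries.
        threshold = torch.topk(grad.abs().flatten(), n_drop).values.min()
        grad[grad.abs() >= threshold] = 0.0

# Usage inside a fine-tuning loop (loss/optimizer are placeholders):
#   loss.backward()
#   ignore_top_k_updates(model, k=0.1)
#   optimizer.step()
#   optimizer.zero_grad()
```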
