Lifelong Sequential Knowledge Editing without Model Degradation

February 3, 2025
作者: Akshat Gupta, Phudish Prateepamornkul, Maochuan Lu, Ahmed Alaa, Thomas Hartvigsen, Gopala Anumanchipalli
cs.AI

Abstract

Prior work in parameter-modifying knowledge editing has shown that large-scale sequential editing leads to significant model degradation. In this paper, we study the reasons behind this and scale sequential knowledge editing to 10,000 sequential edits, while maintaining the downstream performance of the original model. We first show that locate-then-edit knowledge editing methods lead to overfitting on the edited facts. We also show that continuous knowledge editing using these methods leads to disproportionate growth in the norm of the edited matrix. We then provide a crucial insight into the inner workings of locate-then-edit methods. We show that norm growth is a hidden trick employed by these methods that gives larger importance to the output activations produced from the edited layers. With this "importance hacking", the edited layers provide a much larger contribution to the model's output. To mitigate these issues, we present ENCORE - Early stopping and Norm-Constrained Robust knowledge Editing. ENCORE controls for overfitting and the disproportionate norm growth to enable long-term sequential editing, where we are able to perform up to 10,000 sequential edits without loss of downstream performance. ENCORE is also 61% faster than MEMIT and 64% faster than AlphaEdit on Llama3-8B.
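
To make the norm-constraint idea concrete, the following is a minimal illustrative sketch (in PyTorch) of capping the Frobenius norm of an edited weight matrix across sequential edits. It is not ENCORE's actual procedure from the paper; the function name, the 1.1x norm budget, and the random "edit updates" are assumptions for illustration only.

```python
# Illustrative sketch (not the paper's code): after each sequential edit, measure the
# Frobenius norm of the edited weight matrix and rescale it back toward a budget
# relative to the pre-edit norm. All names and the 1.1x budget are assumptions.
import torch

def apply_edit_with_norm_budget(weight: torch.Tensor,
                                delta: torch.Tensor,
                                original_norm: float,
                                budget: float = 1.1) -> torch.Tensor:
    """Add an edit update to `weight`, then rescale if the Frobenius norm
    exceeds `budget * original_norm` (a stand-in for a norm constraint)."""
    edited = weight + delta
    norm = torch.linalg.norm(edited)          # Frobenius norm for a 2-D matrix
    limit = budget * original_norm
    if norm > limit:
        edited = edited * (limit / norm)      # project back inside the norm budget
    return edited

# Toy usage: simulate 10 sequential edits on a random weight matrix and
# watch the norm stay bounded instead of growing with every edit.
torch.manual_seed(0)
W = torch.randn(64, 64)
orig_norm = torch.linalg.norm(W).item()
for step in range(10):
    delta = 0.05 * torch.randn_like(W)        # stand-in for a computed edit update
    W = apply_edit_with_norm_budget(W, delta, orig_norm)
    print(f"edit {step + 1}: ||W||_F = {torch.linalg.norm(W).item():.3f}")
```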
