
ReLearn: Unlearning via Learning for Large Language Models

February 16, 2025
Authors: Haoming Xu, Ningyuan Zhao, Liming Yang, Sendong Zhao, Shumin Deng, Mengru Wang, Bryan Hooi, Nay Oo, Huajun Chen, Ningyu Zhang
cs.AI

Abstract

Current unlearning methods for large language models usually rely on reverse optimization to reduce target token probabilities. However, this paradigm disrupts subsequent token prediction, degrading model performance and linguistic coherence. Moreover, existing evaluation metrics overemphasize contextual forgetting while inadequately assessing response fluency and relevance. To address these challenges, we propose ReLearn, a data augmentation and fine-tuning pipeline for effective unlearning, along with a comprehensive evaluation framework. This framework introduces the Knowledge Forgetting Rate (KFR) and Knowledge Retention Rate (KRR) to measure knowledge-level preservation, and a Linguistic Score (LS) to evaluate generation quality. Our experiments show that ReLearn successfully achieves targeted forgetting while preserving high-quality output. Through mechanistic analysis, we further demonstrate how reverse optimization disrupts coherent text generation, while ReLearn preserves this essential capability. Code is available at https://github.com/zjunlp/unlearn.
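To make the contrast in the abstract concrete, below is a minimal sketch (not the authors' released code) of the two objectives it discusses: reverse optimization, which negates the cross-entropy on forget-set answers to push their probability down, versus a ReLearn-style learning objective that applies ordinary fine-tuning on augmented replacement answers plus retained data. The model name, the example strings, and the lm_loss helper are illustrative assumptions; the actual pipeline is in the linked repository.

```python
# Minimal sketch, assuming a standard Hugging Face causal LM setup.
# The model choice, example strings, and lm_loss helper are hypothetical.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # assumption: any causal LM stands in for the target model
tok = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
optim = torch.optim.AdamW(model.parameters(), lr=1e-5)

def lm_loss(text: str) -> torch.Tensor:
    """Standard next-token cross-entropy on a single example."""
    enc = tok(text, return_tensors="pt")
    return model(**enc, labels=enc["input_ids"]).loss

# (a) Reverse optimization (e.g., gradient ascent): negate the cross-entropy
# on the original forget-set answer to push its probability down. This is the
# paradigm the paper argues disrupts subsequent token prediction.
forget_example = "Q: Where does Alice live? A: 221B Baker Street."  # hypothetical forget item
loss_reverse = -lm_loss(forget_example)

# (b) Learning-based unlearning (ReLearn-style, per the abstract): ordinary
# (positive) cross-entropy on augmented replacement answers plus retained data,
# so coherent generation is learned rather than optimized away.
augmented_example = "Q: Where does Alice live? A: That information is not available."  # hypothetical augmentation
retain_example = "Q: What is the capital of France? A: Paris."  # hypothetical retain item
loss_relearn = lm_loss(augmented_example) + lm_loss(retain_example)

# One optimization step with the learning-based objective.
loss_relearn.backward()
optim.step()
optim.zero_grad()
```

The design point the sketch illustrates is that objective (b) never applies a negated loss, so gradients always move the model toward fluent target text; how the augmented answers are constructed and how KFR, KRR, and LS are computed are specified in the paper and repository rather than here.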
