ChatPaper.ai

Self-Taught Self-Correction for Small Language Models

March 11, 2025
Authors: Viktor Moskvoretskii, Chris Biemann, Irina Nikishina
cs.AI

Abstract

Although large language models (LLMs) have achieved remarkable performance across various tasks, they remain prone to errors. A key challenge is enabling them to self-correct. While prior research has relied on external tools or large proprietary models, this work explores self-correction in small language models (SLMs) through iterative fine-tuning using solely self-generated data. We introduce the Self-Taught Self-Correction (STaSC) algorithm, which incorporates multiple algorithmic design choices. Experimental results on a question-answering task demonstrate that STaSC effectively learns self-correction, leading to significant performance improvements. Our analysis further provides insights into the mechanisms of self-correction and the impact of different design choices on learning dynamics and overall performance. To support future research, we release our user-friendly codebase and lightweight models.
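The abstract describes STaSC as iterative fine-tuning on self-generated corrections. A minimal sketch of one such iteration is shown below; the function names, the filtering rule (keep a correction only if it improves a reward), and the toy stand-ins are assumptions for illustration, not the paper's exact algorithm:

```python
# Hypothetical sketch of one STaSC-style iteration: generate an initial
# answer, sample a self-correction, keep only corrections that improve
# a reward signal, then fine-tune on the surviving pairs.

def stasc_iteration(model, questions, answer_fn, correct_fn, reward_fn, fine_tune_fn):
    """Run one self-taught self-correction round and return the updated model."""
    training_pairs = []
    for q in questions:
        initial = answer_fn(model, q)
        correction = correct_fn(model, q, initial)
        # One possible design choice: retain a correction only if it
        # strictly improves over the initial answer.
        if reward_fn(q, correction) > reward_fn(q, initial):
            training_pairs.append((q, initial, correction))
    return fine_tune_fn(model, training_pairs), training_pairs

# Toy stand-ins so the loop is runnable: the "model" is a dict mapping
# questions to answers, and "fine-tuning" just memorizes corrections.
model = {"2+2": "5"}
gold = {"2+2": "4"}

def answer_fn(m, q):
    return m.get(q, "")

def correct_fn(m, q, a):
    return gold[q]  # pretend the model proposes the right fix

def reward_fn(q, a):
    return 1.0 if a == gold[q] else 0.0

def fine_tune_fn(m, pairs):
    return {**m, **{q: c for q, _, c in pairs}}

model, pairs = stasc_iteration(model, ["2+2"], answer_fn, correct_fn,
                               reward_fn, fine_tune_fn)
```

In the paper's setting the model would be an SLM, the reward would come from question-answering correctness, and this loop would repeat for several iterations.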

