MetaSC: Test-Time Safety Specification Optimization for Language Models
February 11, 2025
Author: Víctor Gallego
cs.AI
Abstract
We propose a novel dynamic safety framework that optimizes language model
(LM) safety reasoning at inference time without modifying model weights.
Building on recent advances in self-critique methods, our approach leverages a
meta-critique mechanism that iteratively updates safety prompts, termed
specifications, to drive the critique and revision process adaptively. This
test-time optimization improves performance not only against adversarial
jailbreak requests but also on diverse general safety-related tasks, such as
avoiding moral harm or pursuing honest responses. Our empirical evaluations
across several language models demonstrate that dynamically optimized safety
prompts yield significantly higher safety scores compared to fixed system
prompts and static self-critique defenses. Code to be released at
https://github.com/vicgalle/meta-self-critique.git.
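The test-time loop described in the abstract can be sketched as follows. This is a minimal illustration under stated assumptions: `llm` is a hypothetical callable mapping a prompt string to generated text, and the prompt templates and iteration count are placeholders, not the paper's actual implementation.

```python
def meta_self_critique(llm, request, spec, n_iters=2):
    """Sketch of test-time safety specification optimization.

    Generates a response, then alternates self-critique/revision with a
    meta-critique step that rewrites the specification itself, all without
    touching model weights. `llm` is an assumed prompt -> text callable.
    """
    # Initial response conditioned on the current safety specification.
    response = llm(f"Specification: {spec}\nRequest: {request}\nRespond:")
    for _ in range(n_iters):
        # Self-critique: judge the response against the specification.
        critique = llm(f"Specification: {spec}\nCritique this response:\n{response}")
        # Revision: rewrite the response to address the critique.
        response = llm(f"Specification: {spec}\nCritique: {critique}\n"
                       f"Revise this response:\n{response}")
        # Meta-critique: optimize the specification itself for the next round.
        spec = llm(f"Improve this safety specification in light of the critique.\n"
                   f"Specification: {spec}\nCritique: {critique}")
    return response, spec
```

In this sketch the specification, not the response alone, is the object being optimized at inference time, which is what distinguishes the meta-critique step from a static self-critique defense.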