Auto-RT: Automatic Jailbreak Strategy Exploration for Red-Teaming Large Language Models

January 3, 2025
Authors: Yanjiang Liu, Shuhen Zhou, Yaojie Lu, Huijia Zhu, Weiqiang Wang, Hongyu Lin, Ben He, Xianpei Han, Le Sun
cs.AI

Abstract

Automated red-teaming has become a crucial approach for uncovering vulnerabilities in large language models (LLMs). However, most existing methods focus on isolated safety flaws, limiting their ability to adapt to dynamic defenses and uncover complex vulnerabilities efficiently. To address this challenge, we propose Auto-RT, a reinforcement learning framework that automatically explores and optimizes complex attack strategies to effectively uncover security vulnerabilities through malicious queries. Specifically, we introduce two key mechanisms to reduce exploration complexity and improve strategy optimization: 1) Early-terminated Exploration, which accelerates exploration by focusing on high-potential attack strategies; and 2) a Progressive Reward Tracking algorithm with intermediate downgrade models, which dynamically refines the search trajectory toward successful vulnerability exploitation. Extensive experiments across diverse LLMs demonstrate that, by significantly improving exploration efficiency and automatically optimizing attack strategies, Auto-RT detects a broader range of vulnerabilities, achieving faster detection speed and 16.63% higher success rates compared to existing methods.
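
To make the abstract's two mechanisms concrete, below is a minimal, hypothetical Python sketch of an Auto-RT-style search loop. The names and values used here (attack_success_rate, auto_rt_search, downgraded_targets, early_stop_threshold, the linear reward weights) are illustrative assumptions, not the paper's implementation or API; in particular, strategy proposals are drawn at random in this sketch, whereas Auto-RT trains them with reinforcement learning.

```python
import random

def attack_success_rate(strategy, target, n_queries=8):
    """Estimate how often `strategy` elicits unsafe output from `target`.
    `target` is a stand-in callable returning True when a probe query succeeds."""
    return sum(bool(target(strategy, i)) for i in range(n_queries)) / n_queries

def auto_rt_search(strategies, target, downgraded_targets,
                   early_stop_threshold=0.05, max_steps=100, seed=0):
    """Toy search loop illustrating the abstract's two mechanisms:
    1) early-terminated exploration: cheaply probe a strategy on the weakest
       downgrade model and abandon low-potential strategies early;
    2) progressive reward tracking: score surviving strategies against the
       intermediate downgrade models and the real target, weighting the
       target's reward most heavily to steer the search toward real exploits."""
    rng = random.Random(seed)
    best_strategy, best_reward = None, -1.0
    models = list(downgraded_targets) + [target]
    weights = [i + 1 for i in range(len(models))]   # later (safer) models weigh more
    total = float(sum(weights))
    for _ in range(max_steps):
        strategy = rng.choice(strategies)           # stand-in for the RL policy's proposal
        # Early-terminated exploration: a two-query probe on the weakest model.
        probe = attack_success_rate(strategy, models[0], n_queries=2)
        if probe < early_stop_threshold:
            continue                                # drop low-potential strategy
        # Progressive reward tracking: weighted blend across downgrade models + target.
        reward = sum(w * attack_success_rate(strategy, m)
                     for w, m in zip(weights, models)) / total
        if reward > best_reward:
            best_strategy, best_reward = strategy, reward
    return best_strategy, best_reward
```

In this toy version the downgrade models only serve as cheaper reward sources; the paper's contribution lies in how the RL policy and these intermediate models are combined to keep the reward signal dense while the target model's defenses adapt.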
