ChatPaper.aiChatPaper

审计机器人基础模型的具身式红队行动

Embodied Red Teaming for Auditing Robotic Foundation Models

November 27, 2024
作者: Sathwik Karnik, Zhang-Wei Hong, Nishant Abhangi, Yen-Chen Lin, Tsun-Hsuan Wang, Christophe Dupuy, Rahul Gupta, Pulkit Agrawal
cs.AI

摘要

语言条件的机器人模型有潜力使机器人能够根据自然语言指令执行各种任务。然而,评估它们的安全性和有效性仍然具有挑战性,因为很难测试单个任务可以被表达的所有不同方式。当前的基准测试存在两个关键限制:它们依赖于有限的人类生成的指令集,错过了许多具有挑战性的情况,并且仅关注任务性能,而不评估安全性,比如避免损坏。为了解决这些缺陷,我们引入了具有具体背景的红队行动(ERT),这是一种新的评估方法,用于生成多样化和具有挑战性的指令以测试这些模型。ERT利用自动化的红队技术与视觉语言模型(VLMs)结合,创建具有上下文背景的困难指令。实验结果显示,最先进的语言条件机器人模型在ERT生成的指令上失败或表现不安全,突显了当前基准测试在评估实际性能和安全性方面的缺陷。代码和视频可在以下网址找到:https://s-karnik.github.io/embodied-red-team-project-page。
English
Language-conditioned robot models have the potential to enable robots to perform a wide range of tasks based on natural language instructions. However, assessing their safety and effectiveness remains challenging because it is difficult to test all the different ways a single task can be phrased. Current benchmarks have two key limitations: they rely on a limited set of human-generated instructions, missing many challenging cases, and focus only on task performance without assessing safety, such as avoiding damage. To address these gaps, we introduce Embodied Red Teaming (ERT), a new evaluation method that generates diverse and challenging instructions to test these models. ERT uses automated red teaming techniques with Vision Language Models (VLMs) to create contextually grounded, difficult instructions. Experimental results show that state-of-the-art language-conditioned robot models fail or behave unsafely on ERT-generated instructions, underscoring the shortcomings of current benchmarks in evaluating real-world performance and safety. Code and videos are available at: https://s-karnik.github.io/embodied-red-team-project-page.

Summary

AI-Generated Summary

PDF12February 11, 2025