o3-mini vs DeepSeek-R1: Which One is Safer?
January 30, 2025
Authors: Aitor Arrieta, Miriam Ugarte, Pablo Valle, José Antonio Parejo, Sergio Segura
cs.AI
Abstract
The irruption of DeepSeek-R1 constitutes a turning point for the AI industry in general and for LLMs in particular. It has demonstrated outstanding performance on several tasks, including creative thinking, code generation, maths and automated program repair, at an apparently lower execution cost. However, LLMs must adhere to an important qualitative property, namely their alignment with safety and human values. A clear competitor of DeepSeek-R1 is its American counterpart, OpenAI's o3-mini model, which is expected to set high standards in terms of performance, safety and cost. In this paper we conduct a systematic assessment of the safety level of both DeepSeek-R1 (70b version) and OpenAI's o3-mini (beta version). To this end, we make use of our recently released automated safety testing tool, ASTRAL. By leveraging this tool, we automatically and systematically generate and execute a total of 1260 unsafe test inputs on both models. After a semi-automated assessment of the outcomes produced by both LLMs, the results indicate that DeepSeek-R1 is markedly less safe than OpenAI's o3-mini. In our evaluation, DeepSeek-R1 responded unsafely to 11.98% of the executed prompts, whereas o3-mini did so to only 1.19%.