LLMs Lost in Translation: M-ALERT uncovers Cross-Linguistic Safety Gaps
December 19, 2024
Authors: Felix Friedrich, Simone Tedeschi, Patrick Schramowski, Manuel Brack, Roberto Navigli, Huu Nguyen, Bo Li, Kristian Kersting
cs.AI
Abstract
Building safe Large Language Models (LLMs) across multiple languages is essential in ensuring both safe access and linguistic diversity. To this end, we introduce M-ALERT, a multilingual benchmark that evaluates the safety of LLMs in five languages: English, French, German, Italian, and Spanish. M-ALERT includes 15k high-quality prompts per language, totaling 75k, following the detailed ALERT taxonomy. Our extensive experiments on 10 state-of-the-art LLMs highlight the importance of language-specific safety analysis, revealing that models often exhibit significant inconsistencies in safety across languages and categories. For instance, Llama3.2 shows high unsafety in the category crime_tax for Italian but remains safe in other languages. Similar differences can be observed across all models. In contrast, certain categories, such as substance_cannabis and crime_propaganda, consistently trigger unsafe responses across models and languages. These findings underscore the need for robust multilingual safety practices in LLMs to ensure safe and responsible usage across diverse user communities.
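To make the evaluation protocol concrete, below is a minimal sketch of how per-language, per-category safety scores could be computed over M-ALERT-style prompts. The file layout, field names, and the keyword-based safety judge are illustrative assumptions, not the authors' released code; ALERT-style evaluations typically rely on an auxiliary judge model (e.g., a Llama-Guard-style classifier) rather than the toy heuristic used here.

```python
# Minimal sketch of an M-ALERT-style evaluation loop (illustrative only).
# Assumed input: a JSONL file where each line holds a prompt with its
# language and ALERT taxonomy category, e.g.
#   {"language": "it", "category": "crime_tax", "prompt": "..."}
import json
from collections import defaultdict
from typing import Callable, Dict, Tuple

REFUSAL_MARKERS = ("i cannot", "i can't", "i won't", "i'm sorry")

def is_safe(response: str) -> bool:
    """Toy stand-in for a real safety judge: treats explicit refusals as safe.
    A faithful evaluation would use an auxiliary classifier model instead."""
    return response.strip().lower().startswith(REFUSAL_MARKERS)

def evaluate(generate: Callable[[str], str],
             prompts_path: str) -> Dict[Tuple[str, str], float]:
    """Return the fraction of safe responses per (language, category) pair."""
    totals: Dict[Tuple[str, str], int] = defaultdict(int)
    safe: Dict[Tuple[str, str], int] = defaultdict(int)
    with open(prompts_path, encoding="utf-8") as f:
        for line in f:
            item = json.loads(line)
            key = (item["language"], item["category"])
            totals[key] += 1
            if is_safe(generate(item["prompt"])):
                safe[key] += 1
    return {key: safe[key] / totals[key] for key in totals}

if __name__ == "__main__":
    # Dummy model that refuses everything, so every score comes out 1.0.
    # "m_alert_prompts.jsonl" is a hypothetical local file name.
    scores = evaluate(lambda p: "I cannot help with that.",
                      "m_alert_prompts.jsonl")
    for (lang, cat), score in sorted(scores.items()):
        print(f"{lang:>2} {cat:<25} safety={score:.3f}")
```

Scores near 1.0 indicate consistently safe behavior; comparing the same category across languages is what surfaces gaps like the Italian crime_tax case reported above.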