MinorBench: A hand-built benchmark for content-based risks for children

March 13, 2025
Authors: Shaun Khoo, Gabriel Chua, Rachel Shong
cs.AI

Abstract

Large Language Models (LLMs) are rapidly entering children's lives - through parent-driven adoption, schools, and peer networks - yet current AI ethics and safety research does not adequately address content-related risks specific to minors. In this paper, we highlight these gaps with a real-world case study of an LLM-based chatbot deployed in a middle school setting, revealing how students used and sometimes misused the system. Building on these findings, we propose a new taxonomy of content-based risks for minors and introduce MinorBench, an open-source benchmark designed to evaluate LLMs on their ability to refuse unsafe or inappropriate queries from children. We evaluate six prominent LLMs under different system prompts, demonstrating substantial variability in their child-safety compliance. Our results inform practical steps for more robust, child-focused safety mechanisms and underscore the urgency of tailoring AI systems to safeguard young users.
