CLASH: Evaluating Language Models on Judging High-Stakes Dilemmas from Multiple Perspectives
April 15, 2025
Authors: Ayoung Lee, Ryan Sungmo Kwon, Peter Railton, Lu Wang
cs.AI
Abstract
Navigating high-stakes dilemmas involving conflicting values is challenging
even for humans, let alone for AI. Yet prior work in evaluating the reasoning
capabilities of large language models (LLMs) in such situations has been
limited to everyday scenarios. To close this gap, this work first introduces
CLASH (Character perspective-based LLM Assessments in Situations with
High-stakes), a meticulously curated dataset consisting of 345 high-impact
dilemmas along with 3,795 individual perspectives of diverse values. In
particular, we design CLASH to support the study of critical aspects of
value-based decision-making processes that are missing from prior work,
including understanding decision ambivalence and psychological discomfort as
well as capturing the temporal shifts of values in characters' perspectives. By
benchmarking 10 open and closed frontier models, we uncover several key
findings. (1) Even the strongest models, such as GPT-4o and Claude-Sonnet,
achieve less than 50% accuracy in identifying situations where the decision
should be ambivalent, while they perform significantly better in clear-cut
scenarios. (2) While LLMs reasonably predict psychological discomfort as marked
by humans, they inadequately comprehend perspectives involving value shifts,
indicating a need for LLMs to reason over complex values. (3) Our experiments
also reveal a significant correlation between LLMs' value preferences and their
steerability towards a given value. (4) Finally, LLMs exhibit greater
steerability when engaged in value reasoning from a third-party perspective,
compared to a first-person setup, though certain value pairs benefit uniquely
from the first-person framing.