Show Me the Work: Fact-Checkers' Requirements for Explainable Automated Fact-Checking
February 13, 2025
Authors: Greta Warren, Irina Shklovski, Isabelle Augenstein
cs.AI
Abstract
The pervasiveness of large language models and generative AI in online media
has amplified the need for effective automated fact-checking to assist
fact-checkers in tackling the increasing volume and sophistication of
misinformation. The complex nature of fact-checking demands that automated
fact-checking systems provide explanations that enable fact-checkers to
scrutinise their outputs. However, it is unclear how these explanations should
align with the decision-making and reasoning processes of fact-checkers to be
effectively integrated into their workflows. Through semi-structured interviews
with fact-checking professionals, we bridge this gap by: (i) providing an
account of how fact-checkers assess evidence, make decisions, and explain their
processes; (ii) examining how fact-checkers use automated tools in practice;
and (iii) identifying fact-checker explanation requirements for automated
fact-checking tools. The findings show unmet explanation needs and identify
important criteria for replicable fact-checking explanations that trace the
model's reasoning path, reference specific evidence, and highlight uncertainty
and information gaps.