CBT-Bench: Evaluating Large Language Models on Assisting Cognitive Behavior Therapy
October 17, 2024
Authors: Mian Zhang, Xianjun Yang, Xinlu Zhang, Travis Labrum, Jamie C. Chiu, Shaun M. Eack, Fei Fang, William Yang Wang, Zhiyu Zoey Chen
cs.AI
Abstract
There is a significant gap between patient needs and available mental health
support today. In this paper, we aim to thoroughly examine the potential of
using Large Language Models (LLMs) to assist professional psychotherapy. To
this end, we propose a new benchmark, CBT-BENCH, for the systematic evaluation
of cognitive behavioral therapy (CBT) assistance. We include three levels of
tasks in CBT-BENCH: I: Basic CBT knowledge acquisition, with the task of
multiple-choice questions; II: Cognitive model understanding, with the tasks of
cognitive distortion classification, primary core belief classification, and
fine-grained core belief classification; III: Therapeutic response generation,
with the task of generating responses to patient speech in CBT therapy
sessions. These tasks encompass key aspects of CBT that could potentially be
enhanced through AI assistance, while also outlining a hierarchy of capability
requirements, ranging from basic knowledge recitation to engaging in real
therapeutic conversations. We evaluated representative LLMs on our benchmark.
Experimental results indicate that while LLMs perform well in reciting CBT
knowledge, they fall short in complex real-world scenarios requiring deep
analysis of patients' cognitive structures and generating effective responses,
suggesting potential future work.
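As a rough illustration of how a benchmark with this three-level structure might be consumed, the following is a minimal sketch of an evaluation loop for the Level I multiple-choice task. It is not taken from the paper: the item fields (`question`, `options`, `answer_index`), the letter-based answer parsing, and the `model` callable are assumptions made for demonstration only.

```python
# Illustrative sketch only: CBT-Bench's actual data format and evaluation
# protocol are defined in the paper; the field names and scoring here are
# hypothetical.

from dataclasses import dataclass
from typing import Callable, List


@dataclass
class MCQItem:
    question: str
    options: List[str]
    answer_index: int  # index of the correct option


def evaluate_mcq(model: Callable[[str], str], items: List[MCQItem]) -> float:
    """Level I (basic CBT knowledge): accuracy on multiple-choice questions."""
    correct = 0
    for item in items:
        prompt = item.question + "\n" + "\n".join(
            f"{chr(65 + i)}. {opt}" for i, opt in enumerate(item.options)
        )
        # Assume the model is prompted to reply with a single option letter.
        prediction = model(prompt).strip().upper()[:1]
        if prediction == chr(65 + item.answer_index):
            correct += 1
    return correct / len(items) if items else 0.0
```

Levels II and III would require, respectively, classification metrics over the distortion and core-belief labels and some form of response-quality judgment for generated therapist replies; the paper's own protocol defines the actual setup.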