GRS-QA -- Graph Reasoning-Structured Question Answering Dataset

November 1, 2024
Authors: Anish Pahilajani, Devasha Trivedi, Jincen Shuai, Khin S. Yone, Samyak Rajesh Jain, Namyong Park, Ryan A. Rossi, Nesreen K. Ahmed, Franck Dernoncourt, Yu Wang
cs.AI

Abstract

Large Language Models (LLMs) have excelled at multi-hop question answering (M-QA) due to their advanced reasoning abilities. However, the impact of the inherent reasoning structures on LLM M-QA performance remains unclear, largely due to the absence of QA datasets that provide fine-grained reasoning structures. To address this gap, we introduce the Graph Reasoning-Structured Question Answering Dataset (GRS-QA), which includes both semantic contexts and reasoning structures for QA pairs. Unlike existing M-QA datasets, where different reasoning structures are entangled, GRS-QA explicitly captures intricate reasoning pathways by constructing reasoning graphs, where nodes represent textual contexts and edges denote logical flows. These reasoning graphs of different structures enable a fine-grained evaluation of LLM reasoning capabilities across various reasoning structures. Our empirical analysis reveals that LLMs perform differently when handling questions with varying reasoning structures. This finding motivates studying the role of textual structure, as distinct from semantics, in LLM reasoning.
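
To make the reasoning-graph representation concrete, here is a minimal Python sketch of what one GRS-QA-style instance could look like: nodes hold textual contexts and directed edges encode the logical flow between them, as the abstract describes. The class and field names (ReasoningNode, ReasoningGraphQA, context, edges) are illustrative assumptions for this sketch, not the dataset's published schema.

```python
from dataclasses import dataclass, field

@dataclass
class ReasoningNode:
    """One node of the reasoning graph: a piece of supporting text."""
    node_id: int
    context: str  # evidence sentence/paragraph used in this reasoning step

@dataclass
class ReasoningGraphQA:
    """A QA pair annotated with its reasoning graph (hypothetical layout)."""
    question: str
    answer: str
    nodes: list[ReasoningNode] = field(default_factory=list)
    # Directed edges (source_id, target_id): logic flows from source to target.
    edges: list[tuple[int, int]] = field(default_factory=list)

    def structure_signature(self) -> str:
        """Coarse label of the reasoning structure, e.g. a 2-hop chain."""
        return f"{len(self.nodes)}-node/{len(self.edges)}-edge graph"

# Example: a 2-hop chain-structured question (entities are placeholders).
qa = ReasoningGraphQA(
    question="In which country was the director of Film X born?",
    answer="France",
    nodes=[
        ReasoningNode(0, "Film X was directed by Director Y."),
        ReasoningNode(1, "Director Y was born in France."),
    ],
    edges=[(0, 1)],  # hop 1 feeds hop 2
)
print(qa.structure_signature())  # -> "2-node/1-edge graph"
```

Under this kind of representation, grouping instances by the shape of their graph (chain, tree, branching, etc.) is what enables the fine-grained, per-structure evaluation of LLM reasoning that the paper reports.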
