
SQuARE: Sequential Question Answering Reasoning Engine for Enhanced Chain-of-Thought in Large Language Models

February 13, 2025
作者: Daniel Fleischer, Moshe Berchansky, Gad Markovits, Moshe Wasserblat
cs.AI

Abstract

In the rapidly evolving field of Natural Language Processing, Large Language Models (LLMs) are tasked with increasingly complex reasoning challenges. Traditional methods like chain-of-thought prompting have shown promise but often fall short in fully leveraging a model's reasoning capabilities. This paper introduces SQuARE (Sequential Question Answering Reasoning Engine), a novel prompting technique designed to improve reasoning through a self-interrogation paradigm. Building upon CoT frameworks, SQuARE prompts models to generate and resolve multiple auxiliary questions before tackling the main query, promoting a more thorough exploration of various aspects of a topic. Our expansive evaluations, conducted with Llama 3 and GPT-4o models across multiple question-answering datasets, demonstrate that SQuARE significantly surpasses traditional CoT prompts and existing rephrase-and-respond methods. By systematically decomposing queries, SQuARE advances LLM capabilities in reasoning tasks. The code is publicly available at https://github.com/IntelLabs/RAG-FiT/tree/square.
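To make the self-interrogation idea concrete, below is a minimal Python sketch of a SQuARE-style prompt: the model is asked to pose and answer a few auxiliary questions before committing to a final answer. The prompt wording, the choice of three auxiliary questions, and the OpenAI client usage are illustrative assumptions, not the authors' exact template; the official implementation is in the linked repository.

```python
# Minimal sketch of SQuARE-style self-interrogation prompting (illustrative only).
# The prompt text and parameters are assumptions; see
# https://github.com/IntelLabs/RAG-FiT/tree/square for the official code.
from openai import OpenAI

client = OpenAI()

def square_prompt(question: str, n_aux: int = 3) -> str:
    """Build a prompt asking the model to generate and resolve auxiliary
    questions before answering the main query."""
    return (
        f"Before answering the main question, write {n_aux} auxiliary "
        "questions that explore different aspects of it, answer each one, "
        "and then use those answers to give a final answer.\n\n"
        f"Main question: {question}\n\n"
        "Auxiliary questions and answers:"
    )

def answer_with_square(question: str, model: str = "gpt-4o") -> str:
    # Single call: the model produces the auxiliary Q&A pairs and the final
    # answer in one generation, as described in the abstract above.
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": square_prompt(question)}],
        temperature=0.0,
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(answer_with_square("Who directed the film that won Best Picture in 1998?"))
```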

