Reasoning-SQL: Reinforcement Learning with SQL Tailored Partial Rewards for Reasoning-Enhanced Text-to-SQL
March 29, 2025
Authors: Mohammadreza Pourreza, Shayan Talaei, Ruoxi Sun, Xingchen Wan, Hailong Li, Azalia Mirhoseini, Amin Saberi, Sercan Ö. Arık
cs.AI
Abstract
Text-to-SQL is a challenging task involving multiple reasoning-intensive subtasks, including natural language understanding, database schema comprehension, and precise SQL query formulation. Existing approaches often rely on handcrafted reasoning paths with inductive biases that can limit their overall effectiveness. Motivated by the recent success of reasoning-enhanced models such as DeepSeek R1 and OpenAI o1, which effectively leverage reward-driven self-exploration to enhance reasoning capabilities and generalization, we propose a novel set of partial rewards tailored specifically for the Text-to-SQL task. Our reward set includes schema-linking, AI feedback, n-gram similarity, and syntax check, explicitly designed to address the reward sparsity issue prevalent in reinforcement learning (RL). Leveraging group relative policy optimization (GRPO), our approach explicitly encourages large language models (LLMs) to develop the intrinsic reasoning skills necessary for accurate SQL query generation. With models of different sizes, we demonstrate that RL-only training with our proposed rewards consistently achieves higher accuracy and superior generalization compared to supervised fine-tuning (SFT). Remarkably, our RL-trained 14B-parameter model significantly outperforms larger proprietary models, e.g., o3-mini by 4% and Gemini-1.5-Pro-002 by 3%, on the BIRD benchmark. These results highlight the efficacy of our proposed RL training framework with partial rewards for enhancing both accuracy and reasoning capabilities in Text-to-SQL tasks.
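For concreteness, below is a minimal sketch of how the four partial reward signals named in the abstract could be combined into a dense reward for RL training. The abstract does not give the paper's exact formulas, weights, or judge prompts, so every helper, heuristic, and weight here is an illustrative assumption rather than the authors' implementation.

```python
# Illustrative sketch only: function names, heuristics, and weights are
# assumptions mirroring the four reward components named in the abstract
# (syntax check, schema linking, n-gram similarity, AI feedback).
import re
import sqlite3


def syntax_reward(sql: str, db_path: str) -> float:
    """1.0 if the query compiles against the database schema, else 0.0."""
    try:
        conn = sqlite3.connect(db_path)
        try:
            conn.execute(f"EXPLAIN QUERY PLAN {sql}")  # plans without running the query
            return 1.0
        finally:
            conn.close()
    except (sqlite3.Error, sqlite3.Warning):
        return 0.0


def schema_linking_reward(sql: str, gold_sql: str) -> float:
    """Overlap between identifiers used in the prediction and in the gold query."""
    ident = re.compile(r"[A-Za-z_][A-Za-z0-9_]*")
    pred = {t.lower() for t in ident.findall(sql)}
    gold = {t.lower() for t in ident.findall(gold_sql)}
    return len(pred & gold) / max(len(gold), 1)


def ngram_similarity_reward(sql: str, gold_sql: str, n: int = 3) -> float:
    """Character n-gram Jaccard similarity to the gold query."""
    grams = lambda s: {s[i:i + n] for i in range(max(len(s) - n + 1, 1))}
    a, b = grams(sql.lower()), grams(gold_sql.lower())
    return len(a & b) / max(len(a | b), 1)


def partial_reward(sql: str, gold_sql: str, db_path: str,
                   execution_match: bool, llm_judge_score: float) -> float:
    """Dense reward: full credit for execution match, graded partial credit otherwise.

    `llm_judge_score` stands in for the AI-feedback component (e.g. a [0, 1]
    score from a judge model); the 0.3/0.3/0.2/0.2 weights are placeholders.
    """
    if execution_match:
        return 1.0
    return (0.3 * syntax_reward(sql, db_path)
            + 0.3 * schema_linking_reward(sql, gold_sql)
            + 0.2 * ngram_similarity_reward(sql, gold_sql)
            + 0.2 * llm_judge_score)
```

The design point this sketch illustrates is that a syntactically broken or schema-inconsistent query still receives a graded score instead of a flat zero, which is what mitigates the reward sparsity the abstract refers to.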
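Such a dense signal is particularly relevant under GRPO, which in its standard formulation from prior work (not restated in this abstract) samples a group of $G$ candidate queries per question, scores them with rewards $r_1, \dots, r_G$, and assigns each candidate the group-normalized advantage

$$A_i = \frac{r_i - \operatorname{mean}(r_1, \dots, r_G)}{\operatorname{std}(r_1, \dots, r_G)}.$$

With a sparse execution-only reward, a group in which every candidate fails (or every candidate succeeds) yields identical rewards and hence zero advantages and no learning signal; partial rewards keep candidates within a group distinguishable.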