CRANE: Reasoning with constrained LLM generation
February 13, 2025
Authors: Debangshu Banerjee, Tarun Suresh, Shubham Ugare, Sasa Misailovic, Gagandeep Singh
cs.AI
Abstract
Code generation, symbolic math reasoning, and other tasks require LLMs to
produce outputs that are both syntactically and semantically correct.
Constrained LLM generation is a promising direction to enforce adherence to
formal grammar, but prior works have empirically observed that strict
enforcement of formal constraints often diminishes the reasoning capabilities
of LLMs. In this work, we first provide a theoretical explanation for why
constraining LLM outputs to very restrictive grammars that only allow
syntactically valid final answers reduces the reasoning capabilities of the
model. Second, we demonstrate that by augmenting the output grammar with
carefully designed additional rules, it is always possible to preserve the
reasoning capabilities of the LLM while ensuring syntactic and semantic
correctness in its outputs. Building on these theoretical insights, we propose
a reasoning-augmented constrained decoding algorithm, CRANE, which effectively
balances the correctness of constrained generation with the flexibility of
unconstrained generation. Experiments on multiple open-source LLMs and
benchmarks show that CRANE significantly outperforms both state-of-the-art
constrained decoding strategies and standard unconstrained decoding, achieving
up to a 10 percentage point accuracy improvement over baselines on the
challenging symbolic reasoning benchmarks GSM-symbolic and FOLIO.
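The core mechanism the abstract builds on, grammar-constrained decoding, can be illustrated with a minimal sketch. Everything below (the toy grammar in `allowed_next`, the stand-in `dummy_scores`) is invented for illustration and is not CRANE's actual implementation: at each step the decoder masks the candidate tokens to those the grammar permits, then takes the highest-scoring survivor.

```python
# A minimal, hypothetical sketch of grammar-constrained greedy decoding.
# The grammar and scores are invented; CRANE itself augments a real output
# grammar and masks real LLM logits rather than these stand-ins.

def allowed_next(prefix):
    """Next tokens permitted by a toy grammar: digit '+' digit '=' digit <eos>."""
    pattern = ["digit", "+", "digit", "=", "digit", "<eos>"]
    if len(prefix) >= len(pattern):
        return []
    slot = pattern[len(prefix)]
    return ["1", "2", "3"] if slot == "digit" else [slot]

def constrained_decode(score_fn):
    """Greedy decoding where only grammar-allowed tokens compete for the argmax."""
    out = []
    while (allowed := allowed_next(out)):
        scores = score_fn(out)
        out.append(max(allowed, key=lambda t: scores[t]))
        if out[-1] == "<eos>":
            break
    return out

def dummy_scores(prefix):
    """Stand-in for model logits; a real decoder would query the LLM here."""
    return {"1": 0.1, "2": 0.9, "3": 0.2, "+": 0.5, "=": 0.5, "<eos>": 0.5}

print(constrained_decode(dummy_scores))  # ['2', '+', '2', '=', '2', '<eos>']
```

Note that the output is syntactically valid but semantically wrong (2 + 2 = 2): this is precisely the gap the abstract highlights. CRANE's augmented grammar addresses it by letting the model reason in unconstrained spans and enforcing the strict grammar only on the final answer.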