

Syzygy of Thoughts: Improving LLM CoT with the Minimal Free Resolution

April 13, 2025
作者: Chenghao Li, Chaoning Zhang, Yi Lu, Jiaquan Zhang, Qigan Sun, Xudong Wang, Jiwei Wei, Guoqing Wang, Yang Yang, Heng Tao Shen
cs.AI

Abstract

Chain-of-Thought (CoT) prompting enhances the reasoning of large language models (LLMs) by decomposing problems into sequential steps, mimicking human logic and reducing errors. However, complex tasks with vast solution spaces and vague constraints often exceed the capacity of a single reasoning chain. Inspired by Minimal Free Resolution (MFR) in commutative algebra and algebraic geometry, we propose Syzygy of Thoughts (SoT), a novel framework that extends CoT by introducing auxiliary, interrelated reasoning paths. SoT captures deeper logical dependencies, enabling more robust and structured problem-solving. MFR decomposes a module into a sequence of free modules of minimal rank, providing a structured analytical approach to complex systems. The method introduces the concepts of "Module", "Betti numbers", "Freeness", "Mapping", "Exactness", and "Minimality", enabling the systematic decomposition of the original complex problem into logically complete minimal subproblems while preserving key problem features and reducing reasoning length. We tested SoT across diverse datasets (e.g., GSM8K, MATH) and models (e.g., GPT-4o-mini, Qwen2.5), achieving inference accuracy that matches or surpasses mainstream CoT baselines. Additionally, by aligning the sampling process with algebraic constraints, our approach improves the inference-time scalability of LLMs, ensuring both transparent reasoning and high performance. Our code will be publicly available at https://github.com/dlMARiA/Syzygy-of-thoughts.
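
Background on the mathematical analogy (a recap of standard commutative algebra, not a claim drawn from the paper beyond the abstract above): a minimal free resolution of a finitely generated module M over a polynomial ring R is a finite exact sequence of free modules

0 \to F_n \to \cdots \to F_1 \to F_0 \to M \to 0, \qquad F_i \cong R^{\beta_i},

where exactness means the image of each map equals the kernel of the next, minimality means the matrices of the maps contain no unit entries, and the ranks \beta_i are the Betti numbers of M. One natural reading of the abstract's analogy is that the original problem plays the role of the module, the minimal subproblems play the role of the free modules of minimal rank, and the Betti number controls how many subproblems are generated.

To make that reading concrete, the Python sketch below shows one hypothetical way such a pipeline could be orchestrated around an LLM. It is not the authors' released implementation (see the repository linked above); the call_llm parameter, the betti_number default, and all prompts are illustrative assumptions only.

from typing import Callable

def syzygy_of_thoughts(
    problem: str,
    call_llm: Callable[[str], str],   # wire this to any LLM chat client
    betti_number: int = 3,            # number of minimal subproblems to generate
) -> str:
    """One hypothetical way to turn the MFR vocabulary into a prompting pipeline."""
    # "Module": restate the problem as a structured object with explicit
    # givens, constraints, and the quantity to determine.
    module = call_llm(
        "Restate the following problem as a list of givens, constraints, "
        f"and the goal to determine:\n{problem}"
    )

    # "Freeness" / "Betti numbers": generate a fixed number of independent,
    # logically complete minimal subproblems and solve each one.
    subproblems = [
        call_llm(
            f"Problem description:\n{module}\n\n"
            f"State minimal subproblem {i + 1} of {betti_number}, independent "
            "of the others, and solve it step by step."
        )
        for i in range(betti_number)
    ]

    # "Mapping" / "Exactness": compose the partial solutions, checking that
    # the result stays consistent with the constraints of the original problem.
    return call_llm(
        "Combine the following partial solutions into a single consistent answer, "
        "verifying that no constraint of the original problem is violated.\n\n"
        + "\n\n".join(subproblems)
        + f"\n\nOriginal problem:\n{problem}"
    )

A caller would supply any chat-completion function for call_llm (for example, a thin wrapper around an API client); the betti_number argument corresponds to fixing how many auxiliary reasoning paths are sampled.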
