

Syzygy of Thoughts: Improving LLM CoT with the Minimal Free Resolution

April 13, 2025
作者: Chenghao Li, Chaoning Zhang, Yi Lu, Jiaquan Zhang, Qigan Sun, Xudong Wang, Jiwei Wei, Guoqing Wang, Yang Yang, Heng Tao Shen
cs.AI

Abstract

Chain-of-Thought (CoT) prompting enhances the reasoning of large language models (LLMs) by decomposing problems into sequential steps, mimicking human logic and reducing errors. However, complex tasks with vast solution spaces and vague constraints often exceed the capacity of a single reasoning chain. Inspired by Minimal Free Resolution (MFR) in commutative algebra and algebraic geometry, we propose Syzygy of Thoughts (SoT), a novel framework that extends CoT by introducing auxiliary, interrelated reasoning paths. SoT captures deeper logical dependencies, enabling more robust and structured problem-solving. MFR decomposes a module into a sequence of free modules of minimal rank, providing a structured analytical approach to complex systems. This method introduces the concepts of "Module", "Betti numbers", "Freeness", "Mapping", "Exactness", and "Minimality", enabling the systematic decomposition of the original complex problem into logically complete minimal subproblems while preserving key problem features and reducing reasoning length. We tested SoT across diverse datasets (e.g., GSM8K, MATH) and models (e.g., GPT-4o-mini, Qwen2.5), achieving inference accuracy that matches or surpasses mainstream CoT baselines. Additionally, by aligning the sampling process with algebraic constraints, our approach improves inference-time scalability in LLMs, ensuring both transparent reasoning and high performance. Our code will be publicly available at https://github.com/dlMARiA/Syzygy-of-thoughts.
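The abstract describes SoT as decomposing a problem into a fixed number of minimal, logically complete subproblems (the "Betti number" controlling how many auxiliary reasoning paths are generated), solving them along interrelated paths, and synthesizing the results. The following is a minimal sketch of that loop under stated assumptions: the function names, the prompt wording, and the caller-supplied `llm` callable are all illustrative and are not the authors' actual API (the real implementation is in the linked repository).

```python
# Hypothetical sketch of an SoT-style decompose/solve/synthesize loop.
# The `llm` parameter and all prompt strings are assumptions for illustration.
from typing import Callable, List


def syzygy_of_thoughts(
    problem: str,
    llm: Callable[[str], str],
    betti_number: int = 3,
) -> str:
    """Decompose `problem` into `betti_number` minimal subproblems,
    solve each along its own reasoning path, then combine the results."""
    # "Module": the full problem is the object to be resolved;
    # "Freeness"/"Minimality": ask for logically complete, minimal subproblems.
    decompose_prompt = (
        f"Decompose the following problem into {betti_number} minimal, "
        f"logically complete subproblems, one per line:\n{problem}"
    )
    subproblems: List[str] = llm(decompose_prompt).splitlines()[:betti_number]

    # "Mapping"/"Exactness": each subproblem is solved on its own path,
    # and the paths remain interrelated through the final synthesis step.
    partial_answers = [llm(f"Solve: {sp}") for sp in subproblems]

    synth_prompt = "Combine these partial results:\n" + "\n".join(partial_answers)
    return llm(synth_prompt)


# Stub "LLM" used only to demonstrate the control flow.
def echo_llm(prompt: str) -> str:
    if prompt.startswith("Decompose"):
        return "step A\nstep B\nstep C"
    if prompt.startswith("Solve: "):
        return "solved " + prompt[len("Solve: "):]
    return prompt  # synthesis step: echo the combined partial results
```

The sketch makes the claimed scalability knob explicit: raising `betti_number` widens the set of auxiliary reasoning paths sampled per problem, which is how the abstract's "aligning the sampling process with algebraic constraints" would surface in an implementation.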

