
Evolving Deeper LLM Thinking

January 17, 2025
作者: Kuang-Huei Lee, Ian Fischer, Yueh-Hua Wu, Dave Marwood, Shumeet Baluja, Dale Schuurmans, Xinyun Chen
cs.AI

Abstract

We explore an evolutionary search strategy for scaling inference-time compute in Large Language Models. The proposed approach, Mind Evolution, uses a language model to generate, recombine, and refine candidate responses. This avoids the need to formalize the underlying inference problem whenever a solution evaluator is available. Controlling for inference cost, we find that Mind Evolution significantly outperforms other inference strategies such as Best-of-N and Sequential Revision on natural language planning tasks. On the TravelPlanner and Natural Plan benchmarks, Mind Evolution solves more than 98% of the problem instances using Gemini 1.5 Pro, without the use of a formal solver.
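
The abstract describes Mind Evolution only at a high level: an LLM-driven evolutionary loop that generates candidates, scores them with a programmatic evaluator, and then refines and recombines the survivors. The sketch below illustrates that general idea; the `call_llm` and `evaluate` interfaces, the prompts, and the population and generation sizes are illustrative assumptions, not the paper's actual implementation or hyperparameters.

```python
import random
from typing import Callable, List, Tuple

def evolve(
    task: str,
    call_llm: Callable[[str], str],            # assumed: one sampled completion per call
    evaluate: Callable[[str], Tuple[float, str]],  # assumed: (score, textual feedback)
    population_size: int = 8,
    generations: int = 10,
) -> str:
    """Evolutionary search over natural-language candidate solutions.

    Each generation scores candidates with the evaluator, keeps the better
    half, then asks the LLM to (a) refine survivors using the evaluator's
    feedback and (b) recombine pairs of parents into new candidates.
    """
    population: List[str] = [
        call_llm(f"Propose a complete solution to this task:\n{task}")
        for _ in range(population_size)
    ]

    for _ in range(generations):
        # Score and sort candidates, best first.
        scored = [(evaluate(c), c) for c in population]
        scored.sort(key=lambda item: item[0][0], reverse=True)

        (best_score, _), best = scored[0]
        if best_score >= 1.0:  # evaluator reports a fully valid solution
            return best

        survivors = scored[: population_size // 2]

        # Elitism: always carry over the current best candidate.
        new_population: List[str] = [best]

        # Refinement: revise each survivor based on evaluator feedback.
        for (_, feedback), cand in survivors:
            new_population.append(call_llm(
                f"Task:\n{task}\n\nCurrent solution:\n{cand}\n\n"
                f"Evaluator feedback:\n{feedback}\n\n"
                f"Revise the solution to fix the issues."
            ))

        # Recombination: fill the rest by crossing over random parent pairs.
        while len(new_population) < population_size:
            parent_a = random.choice(survivors)[1]
            parent_b = random.choice(survivors)[1]
            new_population.append(call_llm(
                f"Task:\n{task}\n\nSolution A:\n{parent_a}\n\n"
                f"Solution B:\n{parent_b}\n\n"
                f"Combine the strengths of both into one improved solution."
            ))

        population = new_population

    # No perfect solution found: return the highest-scoring candidate.
    return max(population, key=lambda c: evaluate(c)[0])
```

In this framing, Best-of-N corresponds to the initial generation step alone, and Sequential Revision to repeated refinement of a single candidate; the evolutionary loop adds selection and recombination on top of both.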
