SynWorld: Virtual Scenario Synthesis for Agentic Action Knowledge Refinement
April 4, 2025
Authors: Runnan Fang, Xiaobin Wang, Yuan Liang, Shuofei Qiao, Jialong Wu, Zekun Xi, Ningyu Zhang, Yong Jiang, Pengjun Xie, Fei Huang, Huajun Chen
cs.AI
Abstract
In the interaction between agents and their environments, agents expand their
capabilities by planning and executing actions. However, LLM-based agents face
substantial challenges when deployed in novel environments or required to
navigate unconventional action spaces. To empower agents to autonomously
explore environments, optimize workflows, and enhance their understanding of
actions, we propose SynWorld, a framework that allows agents to synthesize
possible scenarios with multi-step action invocation within the action space
and perform Monte Carlo Tree Search (MCTS) exploration to effectively refine
their action knowledge in the current environment. Our experiments demonstrate
that SynWorld is an effective and general approach to learning action knowledge
in new environments. Code is available at https://github.com/zjunlp/SynWorld.
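The abstract describes synthesizing multi-step action scenarios and running MCTS over them to refine action knowledge, but does not give code here. As a rough, generic illustration of UCT-style tree search over an action space (not the SynWorld implementation: the toy environment, action names, and reward shaping below are all invented for this sketch):

```python
import math
import random

# Toy deterministic "environment": a multi-step invocation succeeds only
# when tools are called in the right order. Purely illustrative.
ACTIONS = ["search", "extract", "answer"]
GOAL = ["search", "extract", "answer"]

def reward(sequence):
    """Partial credit for each correct action prefix, 1.0 for the full goal."""
    score = 0
    for got, want in zip(sequence, GOAL):
        if got != want:
            break
        score += 1
    return score / len(GOAL)

class Node:
    def __init__(self, seq, parent=None):
        self.seq = seq          # action prefix this node represents
        self.parent = parent
        self.children = []
        self.visits = 0
        self.value = 0.0

    def expand(self):
        for a in ACTIONS:
            self.children.append(Node(self.seq + [a], parent=self))

    def uct(self, c=1.4):
        if self.visits == 0:
            return float("inf")  # force at least one visit per child
        return self.value / self.visits + c * math.sqrt(
            math.log(self.parent.visits) / self.visits)

def mcts(iterations=500, seed=0):
    random.seed(seed)
    root = Node([])
    for _ in range(iterations):
        # Selection: descend by UCT until an unexpanded node.
        node = root
        while node.children:
            node = max(node.children, key=Node.uct)
        # Expansion: grow the tree while the scenario is incomplete.
        if len(node.seq) < len(GOAL):
            node.expand()
            node = random.choice(node.children)
        # Simulation: complete the action sequence at random.
        rollout = node.seq + [random.choice(ACTIONS)
                              for _ in range(len(GOAL) - len(node.seq))]
        r = reward(rollout)
        # Backpropagation: update statistics along the selected path.
        while node is not None:
            node.visits += 1
            node.value += r
            node = node.parent
    # Refined knowledge = most-visited action sequence.
    seq, node = [], root
    while node.children:
        node = max(node.children, key=lambda n: n.visits)
        seq = node.seq
    return seq

if __name__ == "__main__":
    print(mcts())
```

In SynWorld the rollouts are LLM-generated scenario executions and the "reward" is feedback on the agent's action knowledge rather than a hand-coded score; this sketch only shows the search skeleton such a loop would share.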