TAG: A Decentralized Framework for Multi-Agent Hierarchical Reinforcement Learning
February 21, 2025
Authors: Giuseppe Paolo, Abdelhakim Benechehab, Hamza Cherkaoui, Albert Thomas, Balázs Kégl
cs.AI
Abstract
Hierarchical organization is fundamental to biological systems and human societies, yet artificial intelligence systems often rely on monolithic architectures that limit adaptability and scalability. Current hierarchical reinforcement learning (HRL) approaches typically restrict hierarchies to two levels or require centralized training, which limits their practical applicability. We introduce TAME Agent Framework (TAG), a framework for constructing fully decentralized hierarchical multi-agent systems. TAG enables hierarchies of arbitrary depth through a novel LevelEnv concept, which abstracts each hierarchy level as the environment for the agents above it. This approach standardizes information flow between levels while preserving loose coupling, allowing for seamless integration of diverse agent types. We demonstrate the effectiveness of TAG by implementing hierarchical architectures that combine different RL agents across multiple levels, achieving improved performance over classical multi-agent RL baselines on standard benchmarks. Our results show that decentralized hierarchical organization enhances both learning speed and final performance, positioning TAG as a promising direction for scalable multi-agent systems.
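As a rough illustration of the LevelEnv idea described in the abstract, the sketch below stacks agent levels so that each level exposes an environment-like `step` interface to the level above it, with messages sent down acting like actions and messages sent back up acting like observations and rewards. This is a minimal, hypothetical sketch assuming simple per-agent message passing; the class and method names (`LevelEnv`, `RandomAgent`, `ToyBottomEnv`, `step`, `act`, `update`) are illustrative and not the paper's actual API.

```python
# Hypothetical sketch of the LevelEnv concept: each hierarchy level is wrapped
# so that it looks like an environment to the agents one level above it.
from typing import Any, List, Sequence, Tuple


class RandomAgent:
    """Toy stand-in for any RL agent; a real agent would learn from feedback."""

    def __init__(self, action: float = 1.0):
        self.action = action

    def act(self, observation: Any) -> float:
        return self.action

    def update(self, observation: Any, reward: float) -> None:
        pass  # a learning agent would update its policy here


class ToyBottomEnv:
    """Toy base environment exposing the same step interface as LevelEnv."""

    def step(self, actions: Sequence[float]) -> Tuple[List[float], List[float]]:
        # Observation echoes the action; reward is its magnitude.
        return list(actions), [abs(a) for a in actions]


class LevelEnv:
    """Wraps one level's agents so the level above sees them as an environment.

    Messages sent down behave like actions; messages returned upward behave
    like observations and rewards, keeping adjacent levels loosely coupled.
    """

    def __init__(self, agents: Sequence, lower: Any):
        self.agents = agents  # agents living at this level
        self.lower = lower    # the level below, or the real environment

    def step(self, messages_from_above: Sequence[Any]) -> Tuple[List[Any], List[float]]:
        observations, rewards = [], []
        for agent, msg in zip(self.agents, messages_from_above):
            action = agent.act(msg)               # act on the level below
            obs, rew = self.lower.step([action])  # recurse downward
            agent.update(obs[0], rew[0])          # local, decentralized update
            observations.append(obs[0])
            rewards.append(rew[0])
        return observations, rewards


# Stack two levels on top of a toy environment; the top level interacts with
# the hierarchy exactly as it would with a flat environment.
level1 = LevelEnv([RandomAgent(0.5)], ToyBottomEnv())
level2 = LevelEnv([RandomAgent()], level1)
obs, rew = level2.step(["high-level goal"])
print(obs, rew)
```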