
EfficientViM: Efficient Vision Mamba with Hidden State Mixer based State Space Duality

November 22, 2024
Authors: Sanghyeok Lee, Joonmyung Choi, Hyunwoo J. Kim
cs.AI

Abstract

For the deployment of neural networks in resource-constrained environments, prior works have built lightweight architectures with convolution and attention for capturing local and global dependencies, respectively. Recently, the state space model (SSM) has emerged as an effective mechanism for global token interaction, with a favorable linear computational cost in the number of tokens. Yet, efficient vision backbones built with SSMs have been less explored. In this paper, we introduce Efficient Vision Mamba (EfficientViM), a novel architecture built on hidden state mixer-based state space duality (HSM-SSD) that efficiently captures global dependencies with further reduced computational cost. In the HSM-SSD layer, we redesign the previous SSD layer to enable the channel mixing operation within hidden states. Additionally, we propose multi-stage hidden state fusion to further reinforce the representation power of hidden states, and provide a design that alleviates the bottleneck caused by memory-bound operations. As a result, the EfficientViM family achieves a new state-of-the-art speed-accuracy trade-off on ImageNet-1k, offering up to a 0.7% performance improvement over the second-best model, SHViT, at faster speed. Further, we observe significant improvements in throughput and accuracy compared to prior works when scaling up image size or employing distillation training. Code is available at https://github.com/mlvlab/EfficientViM.
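
As a rough illustration of the hidden state mixer idea, below is a minimal, hypothetical PyTorch sketch of an HSM-SSD-style layer. It assumes a simplified single-head, non-causal SSD formulation: L image tokens are aggregated into N hidden states (with N much smaller than L), the channel-mixing projection and gating are applied to those N states rather than to the L tokens, and the states are then expanded back to token space. The class name `HSMSSDSketch` and the specific projection layout are illustrative assumptions, not the authors' implementation; see the linked repository for the official code.

```python
import torch
import torch.nn as nn

class HSMSSDSketch(nn.Module):
    """Illustrative sketch of a hidden-state-mixer SSD layer (not the official code).

    A plain SSD layer aggregates L tokens into N hidden states (N << L) and
    expands them back to token space. Applying the channel-mixing projections
    to the N hidden states instead of the L tokens reduces that cost from
    O(L * d^2) to O(N * d^2).
    """

    def __init__(self, dim: int, num_states: int = 16):
        super().__init__()
        # Per-token projections producing the state-space parameters
        # B (aggregation weights) and C (expansion weights).
        self.to_bc = nn.Linear(dim, 2 * num_states)
        # Channel mixer and gate applied in hidden-state space (the key change).
        self.state_mixer = nn.Linear(dim, dim)
        self.gate = nn.Linear(dim, dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, L, dim) flattened image tokens.
        b, c = self.to_bc(x).chunk(2, dim=-1)      # each (batch, L, N)
        b = b.softmax(dim=1)                       # normalize over tokens
        h = b.transpose(1, 2) @ x                  # hidden states: (batch, N, dim)
        # Channel mixing on the N states instead of the L tokens (N << L).
        h = self.state_mixer(h) * torch.sigmoid(self.gate(h))
        return c @ h                               # back to tokens: (batch, L, dim)

layer = HSMSSDSketch(dim=128, num_states=16)
tokens = torch.randn(2, 196, 128)    # e.g., a 14x14 grid of image tokens
out = layer(tokens)                  # (2, 196, 128)
```

Since N is fixed and much smaller than the token count L, the d-by-d projections operate on far fewer vectors, which is where the abstract's claimed cost reduction over a standard SSD layer would come from.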
