YOLOv12: Attention-Centric Real-Time Object Detectors
February 18, 2025
Authors: Yunjie Tian, Qixiang Ye, David Doermann
cs.AI
Abstract
Enhancing the network architecture of the YOLO framework has been crucial for
a long time, but has focused on CNN-based improvements despite the proven
superiority of attention mechanisms in modeling capabilities. This is because
attention-based models cannot match the speed of CNN-based models. This paper
proposes an attention-centric YOLO framework, namely YOLOv12, that matches the
speed of previous CNN-based ones while harnessing the performance benefits of
attention mechanisms. YOLOv12 surpasses all popular real-time object detectors
in accuracy with competitive speed. For example, YOLOv12-N achieves 40.6% mAP
with an inference latency of 1.64 ms on a T4 GPU, outperforming advanced
YOLOv10-N / YOLOv11-N by 2.1%/1.2% mAP with a comparable speed. This advantage
extends to other model scales. YOLOv12 also surpasses end-to-end real-time
detectors that improve DETR, such as RT-DETR / RT-DETRv2: YOLOv12-S beats
RT-DETR-R18 / RT-DETRv2-R18 while running 42% faster, using only 36% of the
computation and 45% of the parameters. More comparisons are shown in Figure 1.Summary
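The abstract's central claim is that an attention-centric backbone can match the speed of CNN-based ones. As background, the sketch below shows generic scaled dot-product self-attention in NumPy, the basic mechanism such detectors build on. This is only an illustration, not YOLOv12's actual module (the abstract does not detail the paper's specific attention design); the function name and toy shapes are this sketch's own assumptions.

```python
import numpy as np

def scaled_dot_product_attention(q, k, v):
    """Generic scaled dot-product attention (an illustration of the
    mechanism, not YOLOv12's specific module).

    q, k, v: arrays of shape (batch, tokens, channels).
    """
    d = q.shape[-1]
    # Pairwise token similarities, scaled by sqrt(channel dim): (B, N, N)
    scores = q @ k.transpose(0, 2, 1) / np.sqrt(d)
    scores = scores - scores.max(axis=-1, keepdims=True)  # numerical stability
    weights = np.exp(scores)
    weights = weights / weights.sum(axis=-1, keepdims=True)  # softmax over keys
    # Each output token is an attention-weighted mix of all value tokens.
    return weights @ v, weights

# Toy "feature map": batch of 1, 4 spatial tokens, 8 channels.
x = np.random.default_rng(0).standard_normal((1, 4, 8))
out, w = scaled_dot_product_attention(x, x, x)  # self-attention
print(out.shape)       # (1, 4, 8): same shape as the input tokens
print(w.sum(axis=-1))  # each row of attention weights sums to 1
```

Because every token attends to every other token, plain self-attention costs O(N^2) in the number of spatial tokens; closing that speed gap with CNNs is exactly the engineering problem the paper addresses.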