
BOP Challenge 2024 on Model-Based and Model-Free 6D Object Pose Estimation

April 3, 2025
Authors: Van Nguyen Nguyen, Stephen Tyree, Andrew Guo, Mederic Fourmy, Anas Gouda, Taeyeop Lee, Sungphill Moon, Hyeontae Son, Lukas Ranftl, Jonathan Tremblay, Eric Brachmann, Bertram Drost, Vincent Lepetit, Carsten Rother, Stan Birchfield, Jiri Matas, Yann Labbe, Martin Sundermeyer, Tomas Hodan
cs.AI

Abstract

We present the evaluation methodology, datasets, and results of the BOP Challenge 2024, the sixth in a series of public competitions organized to capture the state of the art in 6D object pose estimation and related tasks. In 2024, our goal was to transition BOP from lab-like setups to real-world scenarios. First, we introduced new model-free tasks, where no 3D object models are available and methods need to onboard objects just from provided reference videos. Second, we defined a new, more practical 6D object detection task, where the identities of objects visible in a test image are not provided as input. Third, we introduced the new BOP-H3 datasets, recorded with high-resolution sensors and AR/VR headsets and closely resembling real-world scenarios. BOP-H3 includes 3D models and onboarding videos to support both model-based and model-free tasks. Participants competed on seven challenge tracks, each defined by a task, an object onboarding setup, and a dataset group. Notably, the best 2024 method for model-based 6D localization of unseen objects (FreeZeV2.1) achieves 22% higher accuracy on BOP-Classic-Core than the best 2023 method (GenFlow) and is only 4% behind the best 2023 method for seen objects (GPose2023), although it is significantly slower (24.9 vs. 2.7 s per image). A more practical 2024 method for this task is Co-op, which takes only 0.8 s per image and is 25× faster and 13% more accurate than GenFlow. Methods rank similarly on 6D detection as on 6D localization, but their run times are higher. On model-based 2D detection of unseen objects, the best 2024 method (MUSE) achieves a 21% relative improvement over the best 2023 method (CNOS). However, 2D detection accuracy for unseen objects is still noticeably (-53%) behind the accuracy for seen objects (GDet2023). The online evaluation system remains open at http://bop.felk.cvut.cz/.
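For readers unfamiliar with the task: a 6D pose is a rigid transformation, a 3D rotation R and a 3D translation t, mapping the object's model coordinate frame to the camera frame. BOP scores pose estimates with symmetry-aware pose-error functions, one of which is MSSD (Maximum Symmetry-aware Surface Distance). The NumPy sketch below illustrates the idea; it is a simplified illustration with our own function naming and argument layout, not the official bop_toolkit implementation:

```python
import numpy as np

def mssd(R_est, t_est, R_gt, t_gt, pts, syms):
    """Maximum Symmetry-aware Surface Distance (illustrative sketch).

    R_est, R_gt: 3x3 rotation matrices; t_est, t_gt: 3x1 translations (mm).
    pts: Nx3 array of 3D model vertices.
    syms: list of (R_sym, t_sym) object symmetry transforms; for an
          asymmetric object this is just [(np.eye(3), np.zeros((3, 1)))].
    """
    pts_est = (R_est @ pts.T + t_est).T  # vertices under the estimated pose
    errs = []
    for R_sym, t_sym in syms:
        # Ground-truth pose composed with one global symmetry of the object.
        pts_gt = (R_gt @ (R_sym @ pts.T + t_sym) + t_gt).T
        errs.append(np.max(np.linalg.norm(pts_est - pts_gt, axis=1)))
    # The error is measured w.r.t. the most favorable symmetry.
    return min(errs)

# Toy usage: a perfect estimate yields zero error.
pts = np.random.rand(100, 3) * 100.0
R, t = np.eye(3), np.array([[0.0], [0.0], [500.0]])
print(mssd(R, t, R, t, pts, [(np.eye(3), np.zeros((3, 1)))]))  # -> 0.0
```

Taking the minimum over the object's symmetry transformations avoids penalizing pose estimates that are visually indistinguishable from the ground truth for symmetric objects.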
