NeuGrasp: Generalizable Neural Surface Reconstruction with Background Priors for Material-Agnostic Object Grasp Detection
March 5, 2025
Authors: Qingyu Fan, Yinghao Cai, Chao Li, Wenzhe He, Xudong Zheng, Tao Lu, Bin Liang, Shuo Wang
cs.AI
Abstract
Robotic grasping in scenes with transparent and specular objects presents great challenges for methods relying on accurate depth information. In this paper, we introduce NeuGrasp, a neural surface reconstruction method that leverages background priors for material-agnostic grasp detection. NeuGrasp integrates transformers and global prior volumes to aggregate multi-view features with spatial encoding, enabling robust surface reconstruction in narrow and sparse viewing conditions. By focusing on foreground objects through residual feature enhancement and refining spatial perception with an occupancy-prior volume, NeuGrasp excels in handling objects with transparent and specular surfaces. Extensive experiments in both simulated and real-world scenarios show that NeuGrasp outperforms state-of-the-art methods in grasping while maintaining comparable reconstruction quality. More details are available at https://neugrasp.github.io/.
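
The abstract mentions two mechanisms: transformer-based aggregation of multi-view features, and residual feature enhancement that contrasts the observed scene with a background prior to emphasize foreground objects. The sketch below illustrates these ideas only at a conceptual level; it is not the authors' implementation, and all module names, tensor shapes, and hyperparameters are assumptions for illustration.

```python
# Minimal sketch (assumed design, not NeuGrasp's released code):
# (1) residual feature enhancement against background-prior features,
# (2) transformer aggregation of per-view features for each 3D sample point.
import torch
import torch.nn as nn


class ResidualFeatureEnhancement(nn.Module):
    """Emphasize foreground objects by contrasting scene features
    with background-prior features (hypothetical formulation)."""

    def __init__(self, dim: int):
        super().__init__()
        self.fuse = nn.Sequential(
            nn.Linear(2 * dim, dim), nn.ReLU(), nn.Linear(dim, dim)
        )

    def forward(self, scene_feat: torch.Tensor, bg_feat: torch.Tensor) -> torch.Tensor:
        # The residual highlights where the observed scene deviates from the
        # known background, i.e. where foreground objects are likely to be.
        residual = scene_feat - bg_feat
        return scene_feat + self.fuse(torch.cat([scene_feat, residual], dim=-1))


class MultiViewAggregator(nn.Module):
    """Fuse per-view features of each 3D sample point with self-attention."""

    def __init__(self, dim: int, num_heads: int = 4, num_layers: int = 2):
        super().__init__()
        layer = nn.TransformerEncoderLayer(
            d_model=dim, nhead=num_heads, batch_first=True
        )
        self.encoder = nn.TransformerEncoder(layer, num_layers=num_layers)

    def forward(self, view_feats: torch.Tensor) -> torch.Tensor:
        # view_feats: (num_points, num_views, dim); attention runs across views.
        fused = self.encoder(view_feats)
        return fused.mean(dim=1)  # one descriptor per 3D point


# Toy usage: random tensors stand in for projected image features.
dim, n_points, n_views = 32, 1024, 4
scene = torch.randn(n_points, n_views, dim)
background = torch.randn(n_points, n_views, dim)

enhanced = ResidualFeatureEnhancement(dim)(scene, background)
per_point = MultiViewAggregator(dim)(enhanced)  # (n_points, dim)
print(per_point.shape)  # would feed an SDF/occupancy head in a full pipeline
```

In a complete pipeline, the per-point descriptors would be decoded into a signed-distance or occupancy field from which grasp poses are detected; that stage is omitted here.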