Monte Carlo Diffusion for Generalizable Learning-Based RANSAC

March 12, 2025
Authors: Jiale Wang, Chen Zhao, Wei Ke, Tong Zhang
cs.AI

Abstract

Random Sample Consensus (RANSAC) is a fundamental approach for robustly estimating parametric models from noisy data. Existing learning-based RANSAC methods use deep learning to enhance the robustness of RANSAC against outliers. However, these approaches are trained and tested on data generated by the same algorithms, which limits their generalization to out-of-distribution data at inference time. In this paper, we therefore introduce a novel diffusion-based paradigm that progressively injects noise into ground-truth data, simulating the noisy conditions used to train learning-based RANSAC. To enhance data diversity, we incorporate Monte Carlo sampling into the diffusion paradigm, approximating diverse data distributions by introducing different types of randomness at multiple stages. We evaluate our approach in the context of feature matching through comprehensive experiments on the ScanNet and MegaDepth datasets. The experimental results demonstrate that our Monte Carlo diffusion mechanism significantly improves the generalization ability of learning-based RANSAC. We also conduct extensive ablation studies that highlight the effectiveness of the key components in our framework.
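To make the idea concrete, below is a minimal Python/NumPy sketch of how noise might be progressively injected into ground-truth feature correspondences, with randomness introduced at several stages (number of diffusion steps, noise scale, outlier ratio). The function name, noise schedule, and value ranges are assumptions chosen for illustration; they are not the authors' actual implementation.

```python
import numpy as np

def monte_carlo_diffusion(gt_matches, num_steps=10, rng=None):
    """Illustrative sketch (not the paper's code): progressively perturb
    ground-truth correspondences, sampling the diffusion step count, noise
    scale, and outlier ratio at random to approximate diverse distributions.

    gt_matches: (N, 4) array of ground-truth correspondences (x1, y1, x2, y2).
    Returns the noisy matches and binary inlier labels.
    """
    rng = np.random.default_rng() if rng is None else rng
    matches = gt_matches.astype(np.float64).copy()
    n = matches.shape[0]

    # Stage 1: randomly choose how many diffusion steps to apply.
    t = rng.integers(1, num_steps + 1)

    # Stage 2: randomly sample a noise scale and an outlier ratio (assumed ranges).
    sigma = rng.uniform(0.5, 3.0)          # pixel noise scale
    outlier_ratio = rng.uniform(0.0, 0.8)  # fraction of matches turned into outliers

    # Progressive noise injection: small Gaussian perturbations accumulated over t steps.
    for _ in range(t):
        matches[:, 2:] += rng.normal(0.0, sigma / np.sqrt(num_steps), size=(n, 2))

    # Stage 3: replace a random subset with uniformly sampled outliers in the second image.
    num_outliers = int(outlier_ratio * n)
    idx = rng.choice(n, size=num_outliers, replace=False)
    lo, hi = matches[:, 2:].min(axis=0), matches[:, 2:].max(axis=0)
    matches[idx, 2:] = rng.uniform(lo, hi, size=(num_outliers, 2))

    labels = np.ones(n, dtype=np.int64)
    labels[idx] = 0  # 0 = outlier, 1 = inlier
    return matches, labels
```

In a training pipeline, a fresh sample like this could be drawn each iteration so that the learning-based RANSAC model sees noisy data from varied distributions rather than from a single fixed matching algorithm, which is the generalization effect the abstract describes.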
