
DepthLab: From Partial to Complete

December 24, 2024
Authors: Zhiheng Liu, Ka Leong Cheng, Qiuyu Wang, Shuzhe Wang, Hao Ouyang, Bin Tan, Kai Zhu, Yujun Shen, Qifeng Chen, Ping Luo
cs.AI

Abstract

Missing values remain a common challenge for depth data across its wide range of applications, stemming from various causes like incomplete data acquisition and perspective alteration. This work bridges this gap with DepthLab, a foundation depth inpainting model powered by image diffusion priors. Our model features two notable strengths: (1) it demonstrates resilience to depth-deficient regions, providing reliable completion for both continuous areas and isolated points, and (2) it faithfully preserves scale consistency with the conditioned known depth when filling in missing values. Drawing on these advantages, our approach proves its worth in various downstream tasks, including 3D scene inpainting, text-to-3D scene generation, sparse-view reconstruction with DUST3R, and LiDAR depth completion, exceeding current solutions in both numerical performance and visual quality. Our project page with source code is available at https://johanan528.github.io/depthlab_web/.
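The scale-consistency property described above can be made concrete with a small sketch. The snippet below is not the DepthLab API; the function name, inputs, and the least-squares alignment step are illustrative assumptions about how a relative depth prediction might be rescaled to agree with sparse known depth before filling in the missing pixels.

```python
# Hypothetical sketch (not the official DepthLab code): align a model's relative
# depth prediction to sparse known depth via least-squares scale/shift, then
# composite the aligned prediction into the missing regions.
import numpy as np


def align_and_fill(pred_depth: np.ndarray,
                   known_depth: np.ndarray,
                   known_mask: np.ndarray) -> np.ndarray:
    """Fill missing depth with `pred_depth` rescaled to match `known_depth`.

    pred_depth:  (H, W) model output in arbitrary (relative) scale.
    known_depth: (H, W) depth values, valid only where known_mask is True.
    known_mask:  (H, W) boolean mask of observed depth pixels.
    """
    p = pred_depth[known_mask].reshape(-1, 1)
    k = known_depth[known_mask].reshape(-1)
    # Solve k ≈ s * p + t in the least-squares sense over the known pixels.
    A = np.hstack([p, np.ones_like(p)])
    (s, t), *_ = np.linalg.lstsq(A, k, rcond=None)
    aligned = s * pred_depth + t
    # Keep observed depth where available; use the aligned prediction elsewhere.
    return np.where(known_mask, known_depth, aligned)


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    gt = rng.uniform(1.0, 5.0, size=(4, 4))      # ground-truth depth
    mask = rng.random((4, 4)) > 0.5              # sparse known pixels
    pred = 0.5 * gt + 0.2                        # relative prediction, wrong scale
    filled = align_and_fill(pred, np.where(mask, gt, 0.0), mask)
    print(np.abs(filled - gt).max())             # ~0: scale and shift recovered
```

The demo at the bottom uses a prediction that differs from the ground truth only by an affine transform, so the alignment recovers it exactly; with a real diffusion-based prediction the residual would be non-zero but the known-depth pixels would still be preserved verbatim.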
