

FrugalNeRF: Fast Convergence for Few-shot Novel View Synthesis without Learned Priors

October 21, 2024
Authors: Chin-Yang Lin, Chung-Ho Wu, Chang-Han Yeh, Shih-Han Yen, Cheng Sun, Yu-Lun Liu
cs.AI

Abstract

Neural Radiance Fields (NeRF) face significant challenges in few-shot scenarios, primarily due to overfitting and long training times for high-fidelity rendering. Existing methods, such as FreeNeRF and SparseNeRF, use frequency regularization or pre-trained priors but struggle with complex scheduling and bias. We introduce FrugalNeRF, a novel few-shot NeRF framework that leverages weight-sharing voxels across multiple scales to efficiently represent scene details. Our key contribution is a cross-scale geometric adaptation scheme that selects pseudo ground truth depth based on reprojection errors across scales. This guides training without relying on externally learned priors, enabling full utilization of the training data. It can also integrate pre-trained priors, enhancing quality without slowing convergence. Experiments on LLFF, DTU, and RealEstate-10K show that FrugalNeRF outperforms other few-shot NeRF methods while significantly reducing training time, making it a practical solution for efficient and accurate 3D scene reconstruction.
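To make the cross-scale geometric adaptation idea concrete, here is a minimal NumPy sketch of how per-pixel pseudo ground-truth depth might be selected from multi-scale renderings: at each pixel, keep the depth from the scale whose reprojection error is lowest. The function name `select_pseudo_gt_depth`, the array shapes, and the assumption that per-scale depth maps and reprojection-error maps are already available are illustrative choices for this sketch, not the authors' implementation.

```python
import numpy as np

def select_pseudo_gt_depth(depths_per_scale, reproj_errors_per_scale):
    """Hypothetical sketch of cross-scale pseudo-GT depth selection.

    depths_per_scale:        (S, H, W) depth maps rendered at S voxel scales
    reproj_errors_per_scale: (S, H, W) per-pixel reprojection errors, e.g. the
                             photometric error after warping into another view
    Returns, per pixel, the depth from the scale with the lowest error,
    plus the index of the chosen scale.
    """
    # Index of the best (lowest-error) scale at every pixel.
    best_scale = np.argmin(reproj_errors_per_scale, axis=0)       # (H, W)
    # Gather the depth value from that scale at each pixel.
    h_idx, w_idx = np.indices(best_scale.shape)
    pseudo_gt_depth = depths_per_scale[best_scale, h_idx, w_idx]  # (H, W)
    return pseudo_gt_depth, best_scale


if __name__ == "__main__":
    # Toy example with random depths and errors, just to show the shapes.
    S, H, W = 3, 4, 4
    rng = np.random.default_rng(0)
    depths = rng.uniform(1.0, 5.0, size=(S, H, W))
    errors = rng.uniform(0.0, 1.0, size=(S, H, W))
    depth_gt, scale_map = select_pseudo_gt_depth(depths, errors)
    print(depth_gt.shape, scale_map.shape)  # (4, 4) (4, 4)
```

In the paper's setting, the resulting pseudo ground-truth depth would then supervise the other scales during training, which is what lets the method exploit the few input views without an externally learned prior.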

