EVER: Exact Volumetric Ellipsoid Rendering for Real-time View Synthesis
October 2, 2024
Authors: Alexander Mai, Peter Hedman, George Kopanas, Dor Verbin, David Futschik, Qiangeng Xu, Falko Kuester, Jon Barron, Yinda Zhang
cs.AI
Abstract
We present Exact Volumetric Ellipsoid Rendering (EVER), a method for
real-time differentiable emission-only volume rendering. Unlike the recent
rasterization-based approach of 3D Gaussian Splatting (3DGS), our
primitive-based representation allows for exact volume rendering, rather than
alpha compositing 3D Gaussian billboards. As such, unlike 3DGS, our
formulation does not suffer from popping artifacts or view-dependent density,
yet still achieves frame rates of ~30 FPS at 720p on an NVIDIA RTX 4090.
Since our approach is built upon ray tracing, it enables effects such as
defocus blur and camera distortion (e.g., from fisheye cameras) that are
difficult to achieve with rasterization. We show that our method is more
accurate, with fewer blending issues, than 3DGS and follow-up work on
view-consistent rendering, especially on the challenging large-scale scenes
from the Zip-NeRF dataset, where it achieves the sharpest results among
real-time techniques.
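To illustrate the distinction the abstract draws, here is a minimal sketch (not the paper's implementation) contrasting 3DGS-style alpha compositing of depth-sorted primitives with exact emission-only volume rendering over piecewise-constant density segments along a ray, which is the situation constant-density ellipsoid primitives produce; the function names and tuple layouts are illustrative assumptions.

```python
import math

def alpha_composite(primitives):
    # 3DGS-style: sort primitives by depth, then blend each one's
    # opacity front-to-back as a flat "billboard" contribution.
    color, transmittance = 0.0, 1.0
    for depth, alpha, c in sorted(primitives):
        color += transmittance * alpha * c
        transmittance *= 1.0 - alpha
    return color

def exact_volume_render(segments):
    # Exact emission-only rendering: constant-density primitives give a
    # piecewise-constant density along the ray, so the transmittance
    # T = exp(-sum(sigma_i * length_i)) is computed in closed form.
    # `segments` is assumed sorted along the ray: (length, sigma, color).
    color, optical_depth = 0.0, 0.0
    for length, sigma, c in segments:
        T_in = math.exp(-optical_depth)      # transmittance entering segment
        optical_depth += sigma * length
        T_out = math.exp(-optical_depth)     # transmittance leaving segment
        color += (T_in - T_out) * c          # emission weighted by absorption
    return color
```

For a single segment the exact integral reduces to alpha compositing with alpha = 1 - exp(-sigma * length); the two approaches diverge (and 3DGS can "pop") once primitives overlap and their depth order changes with viewpoint.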