

DiET-GS: Diffusion Prior and Event Stream-Assisted Motion Deblurring 3D Gaussian Splatting

March 31, 2025
Authors: Seungjun Lee, Gim Hee Lee
cs.AI

Abstract

Reconstructing sharp 3D representations from blurry multi-view images is a long-standing problem in computer vision. Recent works attempt to enhance high-quality novel view synthesis under motion blur by leveraging event cameras, benefiting from their high dynamic range and microsecond temporal resolution. However, they often reach sub-optimal visual quality, either restoring inaccurate color or losing fine-grained details. In this paper, we present DiET-GS, a diffusion prior and event stream-assisted motion deblurring 3D Gaussian Splatting (3DGS) framework. Our framework effectively leverages both blur-free event streams and a diffusion prior in a two-stage training strategy. Specifically, we introduce a novel framework that constrains 3DGS with the event double integral, achieving both accurate color and well-defined details. Additionally, we propose a simple technique that leverages the diffusion prior to further enhance edge details. Qualitative and quantitative results on both synthetic and real-world data demonstrate that our DiET-GS produces novel views of significantly better quality than existing baselines. Our project page is https://diet-gs.github.io
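The "event double integral" the abstract refers to is, in the event-camera literature, the EDI model of Pan et al. (CVPR 2019), which relates a blurry frame to a latent sharp frame through the events accumulated during the exposure. As a reference for the constraint the paper builds on (the exact formulation used in DiET-GS may differ), the standard EDI relation is:

```latex
% Event-based double integral (EDI) model, Pan et al., CVPR 2019.
% e(s): event polarity signal, c: contrast threshold, T: exposure time,
% L(f): latent sharp frame at reference time f, B: observed blurry frame.
\begin{align}
  E(t) &= \int_{f}^{t} e(s)\,\mathrm{d}s
        && \text{(integrated event polarity)} \\
  L(t) &= L(f)\,\exp\!\bigl(c\,E(t)\bigr)
        && \text{(intensity trajectory from events)} \\
  B    &= \frac{1}{T}\int_{f-T/2}^{f+T/2} L(t)\,\mathrm{d}t
        = L(f)\cdot\frac{1}{T}\int_{f-T/2}^{f+T/2}\exp\!\bigl(c\,E(t)\bigr)\,\mathrm{d}t
\end{align}
```

Intuitively, because each event logs a fixed log-intensity change c, integrating the event stream twice recovers the intensity trajectory inside the exposure window, so a blurry observation pins down the sharp latent image up to the contrast threshold.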

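Below is a minimal, hypothetical sketch of how an EDI constraint of this kind could supervise a deblurring 3DGS model: the sharp rendering is blurred through the event stream and compared to the captured blurry frame. Function and variable names (edi_blur, event_sums, contrast) are illustrative assumptions, not the authors' API.

```python
# A minimal sketch (not the authors' code) of an EDI-based photometric loss
# for training a deblurring 3DGS model on blurry frames plus event data.
import torch
import torch.nn.functional as F

def edi_blur(sharp_log, event_sums, contrast=0.2):
    """Synthesize a blurry image from a latent sharp image and events.

    sharp_log:  (H, W) log-intensity of the latent sharp frame L(f).
    event_sums: (N, H, W) integrated event polarities E(t_i) at N
                timestamps sampled across the exposure window.
    contrast:   event-camera contrast threshold c (assumed value).
    Returns the EDI blur B = L(f) * mean_i exp(c * E(t_i)).
    """
    latent = torch.exp(sharp_log)               # L(f)
    weights = torch.exp(contrast * event_sums)  # exp(c * E(t_i))
    return latent * weights.mean(dim=0)         # discretized temporal average

def edi_loss(rendered_sharp, blurry_obs, event_sums, contrast=0.2, eps=1e-6):
    """The EDI-blurred 3DGS rendering should match the blurry observation."""
    sharp_log = torch.log(rendered_sharp.clamp(min=eps))
    simulated_blur = edi_blur(sharp_log, event_sums, contrast)
    return F.mse_loss(simulated_blur, blurry_obs)
```

The key design point this sketch illustrates: the gradient flows from the blurry observation back into the sharp rendering through the event-derived blur operator, so the 3DGS scene is optimized to be sharp even though only blurry frames are photometrically supervised.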