TPDiff: Temporal Pyramid Video Diffusion Model

March 12, 2025
Authors: Lingmin Ran, Mike Zheng Shou
cs.AI

Abstract

The development of video diffusion models unveils a significant challenge: their substantial computational demands. To mitigate this challenge, we note that the reverse process of diffusion has an inherently entropy-reducing nature. Given the inter-frame redundancy of the video modality, maintaining full frame rates in high-entropy stages is unnecessary. Based on this insight, we propose TPDiff, a unified framework to enhance training and inference efficiency. By dividing diffusion into several stages, our framework progressively increases the frame rate along the diffusion process, with only the last stage operating at the full frame rate, thereby optimizing computational efficiency. To train the multi-stage diffusion model, we introduce a dedicated training framework: stage-wise diffusion. By solving the partitioned probability flow ordinary differential equations (ODEs) of diffusion under aligned data and noise, our training strategy is applicable to various diffusion forms and further enhances training efficiency. Comprehensive experimental evaluations validate the generality of our method, demonstrating a 50% reduction in training cost and a 1.5x improvement in inference efficiency.
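To make the staged frame-rate idea concrete, below is a minimal sketch of a temporal-pyramid sampling loop. It assumes a generic noise-prediction network `denoiser(x, sigma)`, nearest-neighbor temporal upsampling between stages, and a plain Euler step along a probability-flow-ODE-like trajectory; the stage frame rates, update rule, and upsampling scheme are illustrative placeholders, not the paper's actual formulation or code.

```python
import torch
import torch.nn.functional as F


def temporal_upsample(frames, factor):
    """Nearest-neighbor interpolation along the temporal axis.
    frames: (batch, channels, time, height, width)."""
    b, c, t, h, w = frames.shape
    return F.interpolate(frames, size=(t * factor, h, w), mode="nearest")


@torch.no_grad()
def staged_sampling(denoiser, shape, stage_frame_rates=(4, 8, 16), steps_per_stage=10):
    """Reverse diffusion run in stages with progressively higher frame rates.

    Early (high-noise) stages operate on temporally subsampled latents;
    each later stage increases the frame count until the final stage runs
    at the full frame rate. `denoiser(x, sigma)` is a stand-in for any
    noise-prediction network, and the update below is a simple Euler step,
    not TPDiff's exact stage-wise scheme.
    """
    b, c, full_t, h, w = shape
    # Start from pure noise at the coarsest frame rate.
    t0 = full_t * stage_frame_rates[0] // stage_frame_rates[-1]
    x = torch.randn(b, c, t0, h, w)

    sigmas = torch.linspace(1.0, 0.0, steps_per_stage * len(stage_frame_rates) + 1)
    step = 0
    for stage, rate in enumerate(stage_frame_rates):
        for _ in range(steps_per_stage):
            sigma, sigma_next = sigmas[step], sigmas[step + 1]
            eps = denoiser(x, sigma)            # predicted noise
            x = x + (sigma_next - sigma) * eps  # Euler step along the ODE
            step += 1
        # Raise the frame rate before entering the next (lower-noise) stage.
        if stage < len(stage_frame_rates) - 1:
            factor = stage_frame_rates[stage + 1] // rate
            x = temporal_upsample(x, factor)
    return x
```

For instance, `staged_sampling(lambda x, s: torch.zeros_like(x), (1, 4, 16, 32, 32))` runs the loop end to end with a dummy denoiser: the first two stages touch only a quarter and a half of the frames, respectively, which is where the claimed compute savings would come from.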
