Spatiotemporal Skip Guidance for Enhanced Video Diffusion Sampling

November 27, 2024
Authors: Junha Hyung, Kinam Kim, Susung Hong, Min-Jung Kim, Jaegul Choo
cs.AI

Abstract

Diffusion models have emerged as a powerful tool for generating high-quality images, videos, and 3D content. While sampling guidance techniques like CFG improve quality, they reduce diversity and motion. Autoguidance mitigates these issues but requires training an extra weak model, limiting its practicality for large-scale models. In this work, we introduce Spatiotemporal Skip Guidance (STG), a simple training-free sampling guidance method for enhancing transformer-based video diffusion models. STG employs an implicit weak model via self-perturbation, avoiding the need for external models or additional training. By selectively skipping spatiotemporal layers, STG produces an aligned, degraded version of the original model to boost sample quality without compromising diversity or dynamic degree. Our contributions include: (1) introducing STG as an efficient, high-performing guidance technique for video diffusion models, (2) eliminating the need for auxiliary models by simulating a weak model through layer skipping, and (3) ensuring quality-enhanced guidance that, unlike CFG, does not compromise sample diversity or dynamics. For additional results, visit https://junhahyung.github.io/STGuidance.
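
To make the mechanism concrete, here is a minimal sketch of one STG-guided denoising step in PyTorch-style Python, assuming a transformer-based denoiser that exposes its spatiotemporal blocks as `model.blocks`. The helpers `skip_blocks`, `blocks_to_skip`, and `stg_scale` are illustrative names, not the authors' API; the update follows the same weak-to-strong extrapolation form as CFG and autoguidance.

```python
from contextlib import contextmanager

@contextmanager
def skip_blocks(model, indices):
    """Temporarily replace selected spatiotemporal transformer blocks
    with identity maps, yielding an aligned but degraded 'weak' model."""
    originals = {i: model.blocks[i].forward for i in indices}
    try:
        for i in indices:
            # Bypass the block: pass the hidden states through unchanged.
            model.blocks[i].forward = lambda hidden, *args, **kwargs: hidden
        yield
    finally:
        # Restore the original forward functions after the weak pass.
        for i, fwd in originals.items():
            model.blocks[i].forward = fwd

def stg_denoise(model, x_t, t, cond, blocks_to_skip, stg_scale=1.0):
    """One STG-guided denoising step: extrapolate away from the
    self-perturbed prediction toward the full model's prediction."""
    # Full-strength prediction with every spatiotemporal layer active.
    eps_full = model(x_t, t, cond)

    # Weak prediction from the same weights with selected layers skipped;
    # no auxiliary model or extra training is required.
    with skip_blocks(model, blocks_to_skip):
        eps_weak = model(x_t, t, cond)

    # Guidance update, analogous in form to CFG / autoguidance:
    # push the sample away from the weak model's output.
    return eps_full + stg_scale * (eps_full - eps_weak)
```

The key design choice is that the weak model shares the strong model's weights, so its prediction stays aligned with the original distribution while being degraded just enough to supply a useful guidance direction, which is how STG can sharpen quality without the diversity and motion losses associated with CFG.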
