
Movie Gen: A Cast of Media Foundation Models

October 17, 2024
作者: Adam Polyak, Amit Zohar, Andrew Brown, Andros Tjandra, Animesh Sinha, Ann Lee, Apoorv Vyas, Bowen Shi, Chih-Yao Ma, Ching-Yao Chuang, David Yan, Dhruv Choudhary, Dingkang Wang, Geet Sethi, Guan Pang, Haoyu Ma, Ishan Misra, Ji Hou, Jialiang Wang, Kiran Jagadeesh, Kunpeng Li, Luxin Zhang, Mannat Singh, Mary Williamson, Matt Le, Matthew Yu, Mitesh Kumar Singh, Peizhao Zhang, Peter Vajda, Quentin Duval, Rohit Girdhar, Roshan Sumbaly, Sai Saketh Rambhatla, Sam Tsai, Samaneh Azadi, Samyak Datta, Sanyuan Chen, Sean Bell, Sharadh Ramaswamy, Shelly Sheynin, Siddharth Bhattacharya, Simran Motwani, Tao Xu, Tianhe Li, Tingbo Hou, Wei-Ning Hsu, Xi Yin, Xiaoliang Dai, Yaniv Taigman, Yaqiao Luo, Yen-Cheng Liu, Yi-Chiao Wu, Yue Zhao, Yuval Kirstain, Zecheng He, Zijian He, Albert Pumarola, Ali Thabet, Artsiom Sanakoyeu, Arun Mallya, Baishan Guo, Boris Araya, Breena Kerr, Carleigh Wood, Ce Liu, Cen Peng, Dimitry Vengertsev, Edgar Schonfeld, Elliot Blanchard, Felix Juefei-Xu, Fraylie Nord, Jeff Liang, John Hoffman, Jonas Kohler, Kaolin Fire, Karthik Sivakumar, Lawrence Chen, Licheng Yu, Luya Gao, Markos Georgopoulos, Rashel Moritz, Sara K. Sampson, Shikai Li, Simone Parmeggiani, Steve Fine, Tara Fowler, Vladan Petrovic, Yuming Du
cs.AI

Abstract

We present Movie Gen, a cast of foundation models that generates high-quality, 1080p HD videos with different aspect ratios and synchronized audio. We also show additional capabilities such as precise instruction-based video editing and generation of personalized videos based on a user's image. Our models set a new state-of-the-art on multiple tasks: text-to-video synthesis, video personalization, video editing, video-to-audio generation, and text-to-audio generation. Our largest video generation model is a 30B parameter transformer trained with a maximum context length of 73K video tokens, corresponding to a generated video of 16 seconds at 16 frames-per-second. We show multiple technical innovations and simplifications on the architecture, latent spaces, training objectives and recipes, data curation, evaluation protocols, parallelization techniques, and inference optimizations that allow us to reap the benefits of scaling pre-training data, model size, and training compute for training large scale media generation models. We hope this paper helps the research community to accelerate progress and innovation in media generation models. All videos from this paper are available at https://go.fb.me/MovieGenResearchVideos.
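The abstract's context-length figure can be sanity-checked with simple arithmetic: a 16-second clip at 16 frames per second yields 256 frames, so 73K video tokens implies roughly 285 tokens per frame. This is a back-of-envelope sketch only; the exact tokenizer compression ratio is not stated in the abstract.

```python
# Back-of-envelope check of the "73K video tokens" context length
# for a 16-second video generated at 16 frames per second.
SECONDS = 16
FPS = 16
CONTEXT_TOKENS = 73_000  # "73K video tokens" (approximate, per the abstract)

frames = SECONDS * FPS                      # total frames in the clip
tokens_per_frame = CONTEXT_TOKENS / frames  # implied tokens per frame

print(frames)                   # 256
print(round(tokens_per_frame))  # ~285
```

The ~285 tokens per frame is an implied average, not a figure the paper states directly; the actual tokenization depends on the model's latent space and spatial compression.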

