TimeChat-Online: 80% Visual Tokens are Naturally Redundant in Streaming Videos
April 24, 2025
Authors: Linli Yao, Yicheng Li, Yuancheng Wei, Lei Li, Shuhuai Ren, Yuanxin Liu, Kun Ouyang, Lean Wang, Shicheng Li, Sida Li, Lingpeng Kong, Qi Liu, Yuanxing Zhang, Xu Sun
cs.AI
Abstract
The rapid growth of online video platforms, particularly live streaming
services, has created an urgent need for real-time video understanding systems.
These systems must process continuous video streams and respond to user queries
instantaneously, presenting unique challenges for current Video Large Language
Models (VideoLLMs). While existing VideoLLMs excel at processing complete
videos, they face significant limitations in streaming scenarios due to their
inability to handle dense, redundant frames efficiently. We introduce
TimeChat-Online, a novel online VideoLLM that revolutionizes real-time video
interaction. At its core lies our innovative Differential Token Drop (DTD)
module, which addresses the fundamental challenge of visual redundancy in
streaming videos. Drawing inspiration from human visual perception's Change
Blindness phenomenon, DTD preserves meaningful temporal changes while filtering
out static, redundant content between frames. Remarkably, our experiments
demonstrate that DTD achieves an 82.8% reduction in video tokens while
maintaining 98% performance on StreamingBench, revealing that over 80% of
visual content in streaming videos is naturally redundant without requiring
language guidance. To enable seamless real-time interaction, we present
TimeChat-Online-139K, a comprehensive streaming video dataset featuring diverse
interaction patterns including backward-tracing, current-perception, and
future-responding scenarios. TimeChat-Online's unique Proactive Response
capability, naturally achieved through continuous monitoring of video scene
transitions via DTD, sets it apart from conventional approaches. Our extensive
evaluation demonstrates TimeChat-Online's superior performance on streaming
benchmarks (StreamingBench and OvOBench) while maintaining competitive results
on long-form video tasks such as Video-MME and MLVU.
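The abstract describes Differential Token Drop only at a high level: retain patch tokens that change meaningfully between consecutive frames and drop static, redundant ones. A minimal sketch of that idea is below; the function name, the normalized-L2 change score, and the threshold value are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def differential_token_drop(frames, threshold=0.3):
    """Sketch of differential token dropping for streaming video.

    frames: array of shape (T, N, D) -- T frames, each with N patch
    tokens of dimension D. Returns a list of (frame_idx, token_idx)
    pairs for the tokens that are kept. The first frame is kept in
    full as the reference.
    """
    kept = [(0, j) for j in range(frames.shape[1])]
    prev = frames[0]
    for t in range(1, frames.shape[0]):
        cur = frames[t]
        # Per-token change score: L2 distance to the reference token,
        # normalized by the reference token's norm (an assumed metric).
        diff = np.linalg.norm(cur - prev, axis=-1)
        scale = np.linalg.norm(prev, axis=-1) + 1e-6
        changed = diff / scale > threshold
        kept.extend((t, int(j)) for j in np.flatnonzero(changed))
        # Update the reference only where tokens changed, so slowly
        # drifting static regions are eventually caught as well.
        prev = np.where(changed[:, None], cur, prev)
    return kept
```

On a mostly static stream this keeps only the first frame plus the few tokens that actually change later, which is the source of the large token reduction the abstract reports; it also requires no language guidance, since the decision depends only on visual differences.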