ViSMaP: Unsupervised Hour-long Video Summarisation by Meta-Prompting
April 22, 2025
Authors: Jian Hu, Dimitrios Korkinof, Shaogang Gong, Mariano Beguerisse-Diaz
cs.AI
Abstract
We introduce ViSMaP: Unsupervised Video Summarisation by Meta-Prompting, a
system that summarises hour-long videos without supervision. Most existing video
understanding models work well on short videos of pre-segmented events, yet
they struggle to summarise longer videos where relevant events are sparsely
distributed and not pre-segmented. Moreover, long-form video understanding
often relies on supervised hierarchical training that needs extensive
annotations which are costly, slow and prone to inconsistency. With ViSMaP we
bridge the gap between short videos (where annotated data is plentiful) and
long ones (where it is scarce). We rely on large language models (LLMs) to create optimised
pseudo-summaries of long videos using segment descriptions from short ones.
These pseudo-summaries are used as training data for a model that generates
long-form video summaries, bypassing the need for expensive annotations of long
videos. Specifically, we adopt a meta-prompting strategy to iteratively
generate and refine pseudo-summaries of long videos. The strategy
leverages short clip descriptions obtained from a supervised short video model
to guide summary generation. Each iteration uses three LLMs working in sequence: one
to generate the pseudo-summary from clip descriptions, another to evaluate it,
and a third to optimise the prompt of the generator. This iteration is
necessary because the quality of the pseudo-summaries is highly dependent on
the generator prompt, and varies widely among videos. We evaluate our summaries
extensively on multiple datasets; our results show that ViSMaP achieves
performance comparable to fully supervised state-of-the-art models while
generalising across domains without sacrificing performance. Code will be
released upon publication.
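The three-LLM iteration described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: the prompts, number of iterations, and the `call_llm` function are all assumptions, with `call_llm` standing in for any chat-completion API (replaced here by a deterministic stub so the sketch runs offline).

```python
def call_llm(prompt: str) -> str:
    """Hypothetical LLM call; a deterministic stub for illustration only."""
    return f"[LLM output for prompt of {len(prompt)} chars]"


def meta_prompt_summarise(clip_descriptions, n_iters=3):
    """Iteratively generate and refine a pseudo-summary of one long video.

    Three LLM roles run in sequence per iteration, as in the abstract:
      1. generator - produces a pseudo-summary from short-clip descriptions
      2. evaluator - critiques the pseudo-summary
      3. optimiser - rewrites the generator's prompt using the critique
    """
    clips_text = "\n".join(clip_descriptions)
    gen_prompt = "Summarise the video from these clip descriptions:"
    summary = ""
    for _ in range(n_iters):
        # 1) Generate a candidate pseudo-summary from the clip descriptions.
        summary = call_llm(f"{gen_prompt}\n{clips_text}")
        # 2) Evaluate the candidate (e.g. coverage, coherence, redundancy).
        critique = call_llm(f"Critique this video summary:\n{summary}")
        # 3) Optimise the generator's prompt for the next iteration.
        gen_prompt = call_llm(
            "Rewrite this summarisation prompt to address the critique.\n"
            f"Prompt: {gen_prompt}\nCritique: {critique}"
        )
    return summary
```

The refined pseudo-summaries produced by such a loop would then serve as training targets for the long-form summarisation model, avoiding manual long-video annotation.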