Visual Chronicles: Using Multimodal LLMs to Analyze Massive Collections of Images
April 11, 2025
Authors: Boyang Deng, Songyou Peng, Kyle Genova, Gordon Wetzstein, Noah Snavely, Leonidas Guibas, Thomas Funkhouser
cs.AI
Abstract
We present a system using Multimodal LLMs (MLLMs) to analyze a large database
with tens of millions of images captured at different times, with the aim of
discovering patterns in temporal changes. Specifically, we aim to capture
frequent co-occurring changes ("trends") across a city over a certain period.
Unlike previous visual analyses, our analysis answers open-ended queries (e.g.,
"what are the frequent types of changes in the city?") without any
predetermined target subjects or training labels. These properties render prior
learning-based or unsupervised visual analysis tools unsuitable. We identify
MLLMs as a novel tool for their open-ended semantic understanding capabilities.
Yet, our datasets are four orders of magnitude too large for an MLLM to ingest
as context. So we introduce a bottom-up procedure that decomposes the massive
visual analysis problem into more tractable sub-problems. We carefully design
MLLM-based solutions to each sub-problem. During experiments and ablation
studies with our system, we find it significantly outperforms baselines and is
able to discover interesting trends from images captured in large cities (e.g.,
"addition of outdoor dining,", "overpass was painted blue," etc.). See more
results and interactive demos at https://boyangdeng.com/visual-chronicles.
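The bottom-up procedure described above can be sketched as a map-then-aggregate loop: each chunk of image pairs is small enough to describe independently, and the per-image change descriptions are then counted to surface frequently recurring changes ("trends"). The snippet below is a minimal illustration, not the paper's implementation; `mllm_summarize` is a hypothetical stand-in for a real MLLM call, and the exact chunking and aggregation strategy in the paper may differ.

```python
from collections import Counter

def mllm_summarize(image_pair):
    # Hypothetical placeholder: a real system would send the before/after
    # images of this location to a multimodal LLM and receive a short
    # textual change description back.
    return image_pair["change"]

def discover_trends(image_pairs, chunk_size=1000, min_support=2):
    """Bottom-up sketch: describe each local change independently on
    context-sized chunks, then aggregate descriptions into trends."""
    descriptions = []
    for i in range(0, len(image_pairs), chunk_size):
        chunk = image_pairs[i:i + chunk_size]
        # Each sub-problem (one chunk) fits within an MLLM context window,
        # sidestepping the four-orders-of-magnitude size gap.
        descriptions.extend(mllm_summarize(p) for p in chunk)
    counts = Counter(descriptions)
    # A "trend" is a change description observed many times across the city.
    return [(desc, n) for desc, n in counts.most_common() if n >= min_support]
```

In practice the aggregation step would also need an MLLM (or embedding clustering) to merge paraphrased descriptions; exact string counting is used here only to keep the sketch self-contained.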