Video-3D LLM: Learning Position-Aware Video Representation for 3D Scene Understanding
November 30, 2024
Authors: Duo Zheng, Shijia Huang, Liwei Wang
cs.AI
Abstract
The rapid advancement of Multimodal Large Language Models (MLLMs) has
significantly impacted various multimodal tasks. However, these models face
challenges in tasks that require spatial understanding within 3D environments.
Efforts to enhance MLLMs, such as incorporating point cloud features, have been
made, yet a considerable gap remains between the models' learned
representations and the inherent complexity of 3D scenes. This discrepancy
largely stems from the training of MLLMs on predominantly 2D data, which
restricts their effectiveness in comprehending 3D spaces. To address this
issue, in this paper, we propose a novel generalist model, i.e., Video-3D LLM,
for 3D scene understanding. By treating 3D scenes as dynamic videos and
incorporating 3D position encoding into these representations, our Video-3D LLM
aligns video representations with real-world spatial contexts more accurately.
Additionally, we have implemented a maximum coverage sampling technique to
optimize the balance between computational costs and performance efficiency.
Extensive experiments demonstrate that our model achieves state-of-the-art
performance on several 3D scene understanding benchmarks, including ScanRefer,
Multi3DRefer, Scan2Cap, ScanQA, and SQA3D.
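The maximum coverage sampling mentioned in the abstract is, at its core, a greedy set-cover-style frame selection. The sketch below is illustrative only, not the authors' implementation: it assumes each candidate frame has already been back-projected into a set of scene voxel ids, and the function name greedy_max_coverage and the voxel-set representation are assumptions made for the example.

```python
from typing import Dict, List, Set


def greedy_max_coverage(frame_voxels: Dict[int, Set[int]], budget: int) -> List[int]:
    """Greedily pick up to `budget` frames that maximize the union of covered voxels.

    frame_voxels maps a frame index to the set of voxel ids visible in that frame
    (e.g., obtained by back-projecting its depth map into a shared 3D grid).
    """
    covered: Set[int] = set()
    remaining = dict(frame_voxels)
    selected: List[int] = []

    while remaining and len(selected) < budget:
        # Pick the frame that contributes the most not-yet-covered voxels.
        best_frame = max(remaining, key=lambda f: len(remaining[f] - covered))
        if not remaining[best_frame] - covered:  # every remaining frame is redundant
            break
        covered |= remaining.pop(best_frame)
        selected.append(best_frame)

    return selected


# Toy example: four candidate frames, keep two.
frames = {0: {1, 2, 3}, 1: {3, 4}, 2: {4, 5, 6, 7}, 3: {1, 7}}
print(greedy_max_coverage(frames, budget=2))  # -> [2, 0]
```

The greedy choice carries the standard (1 - 1/e) approximation guarantee for maximum coverage, which is why it is a common way to trade a fixed frame budget against overall scene coverage.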