Change State Space Models for Remote Sensing Change Detection
April 15, 2025
Authors: Elman Ghazaei, Erchan Aptoula
cs.AI
Abstract
Despite their frequent use for change detection, both ConvNets and Vision
Transformers (ViTs) exhibit well-known limitations: the former struggle to
model long-range dependencies, while the latter are computationally
inefficient, which makes them challenging to train on large-scale datasets.
Vision Mamba, an architecture based on State Space Models, has emerged as an
alternative that addresses these deficiencies, and it has already been applied
to remote sensing change detection, though mostly as a feature-extraction
backbone. This article introduces the Change State Space Model, designed
specifically for change detection: it focuses on the relevant changes between
bi-temporal images and effectively filters out irrelevant information. By
concentrating solely on the changed features, the number of network parameters
is reduced, significantly enhancing computational efficiency while maintaining
high detection performance and robustness against input degradation. The
proposed model has been evaluated on three benchmark datasets, where it
outperformed ConvNets, ViTs, and Mamba-based counterparts at a fraction of
their computational complexity. The implementation will be made available at
https://github.com/Elman295/CSSM upon acceptance.
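The paper's implementation is not yet public, so the following is only a minimal conceptual sketch of the abstract's core idea: gate the bi-temporal feature difference so that unchanged regions are suppressed before a (toy, diagonal, linear) state-space recurrence processes the surviving change features. All function names, the gating rule, and the scalar state transition `a` are illustrative assumptions, not the authors' actual design.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def change_gate(x1, x2):
    """Per-pixel gate in [0, 1], large where bi-temporal features differ.
    (Illustrative choice; the paper's actual gating is not specified here.)"""
    diff = np.abs(x1 - x2)                       # (H, W, C)
    energy = diff.mean(axis=-1, keepdims=True)   # per-pixel change energy
    return sigmoid(energy - energy.mean())       # centred logistic gate

def ssm_scan(u, a=0.9):
    """Toy diagonal linear state-space recurrence over a 1-D sequence:
    h_t = a * h_{t-1} + u_t ;  y_t = h_t  (applied per channel)."""
    h = np.zeros(u.shape[-1])
    ys = np.empty_like(u)
    for t in range(u.shape[0]):
        h = a * h + u[t]
        ys[t] = h
    return ys

def change_features(x1, x2):
    """Suppress unchanged regions, then scan only the gated difference
    features, so the recurrence spends its capacity on changed pixels."""
    g = change_gate(x1, x2)                          # (H, W, 1)
    u = (g * (x2 - x1)).reshape(-1, x1.shape[-1])    # flatten to a sequence
    return ssm_scan(u).reshape(x1.shape)
```

For example, if `x2` equals `x1` except in one half of the image, `change_gate` is larger over the altered half, so the scan's input is near zero elsewhere; this is one plausible way to read "filtering out irrelevant information" in the abstract.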