MMed-RAG: Versatile Multimodal RAG System for Medical Vision Language Models
October 16, 2024
Authors: Peng Xia, Kangyu Zhu, Haoran Li, Tianze Wang, Weijia Shi, Sheng Wang, Linjun Zhang, James Zou, Huaxiu Yao
cs.AI
Abstract
Artificial Intelligence (AI) has demonstrated significant potential in
healthcare, particularly in disease diagnosis and treatment planning. Recent
progress in Medical Large Vision-Language Models (Med-LVLMs) has opened up new
possibilities for interactive diagnostic tools. However, these models often
suffer from factual hallucination, which can lead to incorrect diagnoses.
Fine-tuning and retrieval-augmented generation (RAG) have emerged as methods to
address these issues. However, the scarcity of high-quality data and
distribution shifts between training and deployment data limit the
applicability of fine-tuning. Although RAG is lightweight and effective,
existing RAG-based approaches are not sufficiently general across different
medical domains
and can potentially cause misalignment issues, both between modalities and
between the model and the ground truth. In this paper, we propose a versatile
multimodal RAG system, MMed-RAG, designed to enhance the factuality of
Med-LVLMs. Our approach introduces a domain-aware retrieval mechanism, an
adaptive retrieved-context selection method, and a provable RAG-based
preference fine-tuning strategy. These innovations make the RAG process
sufficiently general and reliable, significantly improving alignment when
introducing retrieved contexts. Experimental results across five medical
datasets (covering radiology, ophthalmology, and pathology) on medical VQA and
report generation demonstrate that MMed-RAG can achieve an average improvement
of 43.8% in the factual accuracy of Med-LVLMs. Our data and code are available
at https://github.com/richard-peng-xia/MMed-RAG.
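
The abstract names the system's components but does not specify them. As a rough illustration under assumed interfaces, the Python sketch below shows what the first two could look like: a domain-aware step that routes a query image to a retriever for its predicted medical domain, and an adaptive selection step that keeps retrieved contexts by a score-gap threshold rather than a fixed top-k. All identifiers and the thresholding rule here are hypothetical, not MMed-RAG's actual implementation (see the linked repository for that).

```python
# Illustrative sketch only: the names, the routing classifier, and the
# gap-based threshold are assumptions, not MMed-RAG's real API.
from dataclasses import dataclass
from typing import Callable, Dict, List


@dataclass
class RetrievedContext:
    text: str
    score: float  # image-text similarity; higher means more relevant


def domain_aware_retrieve(
    image_emb: List[float],
    classify_domain: Callable[[List[float]], str],
    retrievers: Dict[str, Callable[[List[float], int], List[RetrievedContext]]],
    top_k: int = 16,
) -> List[RetrievedContext]:
    """Route the query image to the retriever built for its predicted
    domain (e.g. 'radiology', 'ophthalmology', 'pathology')."""
    domain = classify_domain(image_emb)
    return retrievers[domain](image_emb, top_k)


def adaptive_select(
    contexts: List[RetrievedContext], ratio: float = 0.5
) -> List[RetrievedContext]:
    """Keep contexts whose score clears a data-dependent threshold set
    between the best and worst candidates, instead of a fixed top-k cutoff,
    so low-relevance contexts are dropped per query."""
    ranked = sorted(contexts, key=lambda c: c.score, reverse=True)
    if len(ranked) < 2:
        return ranked
    lo, hi = ranked[-1].score, ranked[0].score
    threshold = lo + ratio * (hi - lo)
    return [c for c in ranked if c.score >= threshold]


if __name__ == "__main__":
    # Toy retriever returning canned report snippets; a real one would
    # query a domain-specific vector index of medical reports.
    retrievers = {
        "radiology": lambda emb, k: [
            RetrievedContext("No focal consolidation.", 0.91),
            RetrievedContext("Mild cardiomegaly.", 0.74),
            RetrievedContext("Unrelated fundus note.", 0.22),
        ][:k]
    }
    contexts = domain_aware_retrieve(
        [0.1, 0.3], lambda emb: "radiology", retrievers
    )
    kept = adaptive_select(contexts, ratio=0.6)
    print([c.text for c in kept])  # drops the low-scoring off-domain note
```

The gap-based rule is only one plausible reading of "adaptive"; the paper's actual selection criterion, and its RAG-based preference fine-tuning stage, are defined in the paper and repository.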