AfriMed-QA: A Pan-African, Multi-Specialty, Medical Question-Answering Benchmark Dataset
November 23, 2024
作者: Tobi Olatunji, Charles Nimo, Abraham Owodunni, Tassallah Abdullahi, Emmanuel Ayodele, Mardhiyah Sanni, Chinemelu Aka, Folafunmi Omofoye, Foutse Yuehgoh, Timothy Faniran, Bonaventure F. P. Dossou, Moshood Yekini, Jonas Kemp, Katherine Heller, Jude Chidubem Omeke, Chidi Asuzu MD, Naome A. Etori, Aimérou Ndiaye, Ifeoma Okoh, Evans Doe Ocansey, Wendy Kinara, Michael Best, Irfan Essa, Stephen Edward Moore, Chris Fourie, Mercy Nyamewaa Asiedu
cs.AI
Abstract
Recent advancements in large language model (LLM) performance on medical
multiple choice question (MCQ) benchmarks have stimulated interest from
healthcare providers and patients globally. Particularly in low- and
middle-income countries (LMICs) facing acute physician shortages and a lack of
specialists, LLMs offer a potentially scalable pathway to enhance healthcare
access and reduce costs. However, their effectiveness in the Global South,
especially across the African continent, remains to be established. In this
work, we introduce AfriMed-QA, the first large-scale Pan-African English
multi-specialty medical Question-Answering (QA) dataset, comprising 15,000
questions (open- and closed-ended) sourced from over 60 medical schools across
16 countries,
covering 32 medical specialties. We further evaluate 30 LLMs across multiple
axes including correctness and demographic bias. Our findings show significant
performance variation across specialties and geographies, with MCQ performance
clearly lagging behind USMLE (MedQA). We find that biomedical LLMs underperform general
models, and that smaller edge-friendly LLMs struggle to achieve a passing score.
Interestingly, human evaluations show a consistent consumer preference for LLM
answers and explanations when compared with clinician answers.
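For MCQs, the correctness axis described above typically reduces to letter-choice accuracy. Below is a minimal, self-contained Python sketch of that metric; the question record layout and the `query_model` stub are illustrative assumptions, not the authors' released evaluation harness.

```python
# Hypothetical sketch of MCQ letter-choice accuracy scoring.
# The record layout and query_model stub are illustrative assumptions,
# not the paper's actual evaluation code.
from typing import Callable

# Toy records; real AfriMed-QA items span 32 specialties and 16 countries.
QUESTIONS = [
    {
        "question": "Which vitamin deficiency causes scurvy?",
        "options": {"A": "Vitamin A", "B": "Vitamin C",
                    "C": "Vitamin D", "D": "Vitamin K"},
        "answer": "B",
    },
]

def mcq_accuracy(questions: list[dict], query_model: Callable[[str], str]) -> float:
    """Fraction of MCQs for which the model's first output letter matches the key."""
    correct = 0
    for q in questions:
        options = "\n".join(f"{k}. {v}" for k, v in q["options"].items())
        prompt = f"{q['question']}\n{options}\nAnswer with a single letter."
        prediction = query_model(prompt).strip().upper()[:1]
        correct += prediction == q["answer"]
    return correct / len(questions)

# Stub model that always answers "B"; swap in a real LLM call to evaluate one.
print(mcq_accuracy(QUESTIONS, lambda prompt: "B"))  # 1.0
```

Running such a loop over each of the 30 evaluated LLMs and aggregating results per specialty or country would yield the kind of performance breakdown the abstract reports.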