AfriMed-QA: A Pan-African, Multi-Specialty, Medical Question-Answering Benchmark Dataset
November 23, 2024
Authors: Tobi Olatunji, Charles Nimo, Abraham Owodunni, Tassallah Abdullahi, Emmanuel Ayodele, Mardhiyah Sanni, Chinemelu Aka, Folafunmi Omofoye, Foutse Yuehgoh, Timothy Faniran, Bonaventure F. P. Dossou, Moshood Yekini, Jonas Kemp, Katherine Heller, Jude Chidubem Omeke, Chidi Asuzu MD, Naome A. Etori, Aimérou Ndiaye, Ifeoma Okoh, Evans Doe Ocansey, Wendy Kinara, Michael Best, Irfan Essa, Stephen Edward Moore, Chris Fourie, Mercy Nyamewaa Asiedu
cs.AI
Abstract
Recent advancements in large language model (LLM) performance on medical
multiple-choice question (MCQ) benchmarks have stimulated interest from
healthcare providers and patients globally. Particularly in low- and
middle-income countries (LMICs) facing acute physician shortages and a lack
of specialists, LLMs offer a potentially scalable pathway to enhance
healthcare access and reduce costs. However, their effectiveness in the
Global South, especially across the African continent, remains to be
established. In this work, we introduce AfriMed-QA, the first large-scale
Pan-African English multi-specialty medical Question-Answering (QA) dataset,
comprising 15,000 questions (open- and closed-ended) sourced from over 60
medical schools across 16 countries and covering 32 medical specialties. We
further evaluate 30 LLMs across multiple axes, including correctness and
demographic bias. Our findings show significant performance variation across
specialties and geographies, with MCQ performance clearly lagging that on
USMLE (MedQA). We find that biomedical LLMs underperform general models and
that smaller, edge-friendly LLMs struggle to achieve a passing score.
Interestingly, human evaluations show a consistent consumer preference for
LLM answers and explanations when compared with clinician answers.
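For readers who want to reproduce the kind of MCQ correctness evaluation the
abstract describes, the following is a minimal sketch in Python. It assumes
the dataset is available on the Hugging Face Hub; the dataset ID
("intronhealth/afrimedqa_v2") and the record fields ("question",
"answer_options", "correct_answer") are assumptions for illustration, not
confirmed by the paper, so check the official release for the actual schema.

```python
# Minimal sketch of an MCQ accuracy evaluation on AfriMed-QA.
# Assumptions (not confirmed by the abstract): the dataset is on the
# Hugging Face Hub under a hypothetical ID, and each MCQ record exposes
# "question", "answer_options" (a dict mapping option letters to text),
# and "correct_answer" (the correct option letter).
from datasets import load_dataset


def build_prompt(record):
    """Format one multiple-choice question as a plain-text prompt."""
    options = "\n".join(
        f"{label}. {text}" for label, text in record["answer_options"].items()
    )
    return (
        f"Question: {record['question']}\n"
        f"Options:\n{options}\n"
        "Answer with the letter of the single best option."
    )


def mcq_accuracy(ask_model, records):
    """Score a model (a callable: prompt -> answer string) on MCQ records."""
    correct = 0
    for record in records:
        reply = ask_model(build_prompt(record)).strip().upper()
        # Compare only the leading option letter, e.g. "B" from "B. Malaria".
        if reply[:1] == record["correct_answer"].strip().upper()[:1]:
            correct += 1
    return correct / len(records)


# Example usage with a stubbed model (replace the lambda with a real
# LLM call; the dataset ID below is hypothetical):
# data = load_dataset("intronhealth/afrimedqa_v2", split="train")
# print(mcq_accuracy(lambda prompt: "A", data))
```

Scoring by the leading option letter keeps the harness model-agnostic; a
production evaluation would likely add stricter answer parsing and per-specialty
or per-country breakdowns to surface the performance variation the paper reports.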