Abstract
The MEDIQA-M3G 2024 challenge calls for novel solutions to Multilingual and Multimodal Medical Answer Generation in dermatology (Yim et al., 2024a). This paper addresses the limitations of traditional methods by proposing a weakly supervised learning approach to open-ended medical question answering (QA). Our system leverages the readily available MEDIQA-M3G images via a VGG16-CNN-SVM model, enabling multilingual (English, Chinese, Spanish) learning of informative skin-condition representations. Using pre-trained QA models, we further bridge the gap between visual and textual information through multimodal fusion, which lets the approach tackle complex, open-ended questions even without predefined answer choices. We generate comprehensive answers by feeding multiple candidate responses, alongside the images, to a ViT-CLIP model. This work advances medical QA research, paving the way for clinical decision support systems and, ultimately, improved healthcare delivery.
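The abstract does not spell out the visual pipeline, but the VGG16-CNN-SVM stage can be pictured as: extract deep convolutional descriptors with a pre-trained VGG16, then fit an SVM over those descriptors to learn skin-condition representations. Below is a minimal sketch under those assumptions; the ImageNet weights, the flattened 25088-d feature layer, and the toy tensors and labels are illustrative choices of ours, not the authors' reported configuration.

import numpy as np
import torch
from torchvision import models, transforms
from sklearn.svm import SVC

# Pre-trained VGG16; replacing the classifier head with Identity makes the
# forward pass return the flattened 25088-d convolutional descriptor.
vgg16 = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1)
vgg16.classifier = torch.nn.Identity()
vgg16.eval()

# Standard ImageNet preprocessing (an assumption; real MEDIQA-M3G photos
# would be loaded with PIL and passed through this transform).
preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

@torch.no_grad()
def extract_features(batch: torch.Tensor) -> np.ndarray:
    """Map a (N, 3, 224, 224) batch to (N, 25088) CNN descriptors."""
    return vgg16(batch).numpy()

# Toy stand-ins for preprocessed lesion images and condition labels.
images = torch.randn(8, 3, 224, 224)
labels = np.array([0, 1, 0, 1, 2, 2, 0, 1])

features = extract_features(images)
clf = SVC(kernel="linear").fit(features, labels)  # SVM over frozen CNN features
print(clf.predict(features[:2]))

Training the SVM on frozen CNN features keeps the supervision requirement light, which is consistent with the weakly supervised framing above.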
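For the answer-generation stage, one way to read "feeding the ViT-CLIP model with multiple responses alongside images" is as CLIP-style image-text matching: embed each candidate response and the image in a shared space and rank candidates by similarity. The sketch below illustrates that reading only; the Hugging Face checkpoint name, the candidate strings, and the softmax ranking are our assumptions, not details given in the abstract.

from PIL import Image
import torch
from transformers import CLIPModel, CLIPProcessor

# A ViT-based CLIP checkpoint (illustrative choice, not the authors' model).
model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")
model.eval()

# Placeholder image; in practice, load the case photo from MEDIQA-M3G.
image = Image.new("RGB", (224, 224))

# Hypothetical candidate responses produced by pre-trained QA models.
candidates = [
    "This appears to be contact dermatitis; avoiding the irritant should help.",
    "The plaques are consistent with psoriasis; consider topical steroids.",
]

inputs = processor(text=candidates, images=image,
                   return_tensors="pt", padding=True)
with torch.no_grad():
    logits = model(**inputs).logits_per_image  # shape (1, num_candidates)
probs = logits.softmax(dim=-1)
print(probs)  # higher score = candidate better grounded in the image

Ranking candidates this way grounds the final answer in the image without requiring predefined answer choices, matching the open-ended setting described above.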
URL
https://arxiv.org/abs/2405.01583