Paper Reading AI Learner

Zero-shot System for Automatic Body Region Detection for Volumetric CT and MR Images

2026-02-09 14:26:24
Farnaz Khun Jush, Grit Werner, Mark Klemens, Matthias Lenga

Abstract

Reliable identification of anatomical body regions is a prerequisite for many automated medical imaging workflows, yet existing solutions remain heavily dependent on unreliable DICOM metadata. Current approaches rely mainly on supervised learning, which limits their applicability in many real-world scenarios. In this work, we investigate whether body region detection in volumetric CT and MR images can be achieved in a fully zero-shot manner by leveraging the knowledge embedded in large pre-trained foundation models. We propose and systematically evaluate three training-free pipelines: (1) a segmentation-driven rule-based system built on pre-trained multi-organ segmentation models, (2) a Multimodal Large Language Model (MLLM) guided by radiologist-defined rules, and (3) a segmentation-aware MLLM that combines visual input with explicit anatomical evidence. All methods are evaluated on 887 heterogeneous CT and MR scans with manually verified anatomical region labels. The segmentation-driven rule-based approach achieves the strongest and most consistent performance, with weighted F1-scores of 0.947 (CT) and 0.914 (MR), demonstrating robustness across modalities and atypical scan coverage. The MLLM performs competitively in visually distinctive regions, while the segmentation-aware MLLM reveals fundamental limitations.
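
As a concrete illustration of pipeline (1), the Python sketch below maps the organ labels produced by a pre-trained multi-organ segmentation model (TotalSegmentator-style names) to body regions via a fixed lookup table, then scores toy predictions with the support-weighted F1 reported in the abstract. The organ-to-region table and the toy labels are illustrative assumptions, not the authors' actual rule set or data.

# Minimal sketch of the segmentation-driven rule-based pipeline.
# Assumes a pre-trained multi-organ segmentation model (e.g. TotalSegmentator)
# has already produced per-volume organ labels; the rule table below is
# hypothetical, not the paper's actual mapping.
from sklearn.metrics import f1_score

ORGAN_TO_REGION = {  # hypothetical organ-to-region rules
    "brain": "head",
    "thyroid_gland": "neck",
    "heart": "chest",
    "lung_upper_lobe_left": "chest",
    "liver": "abdomen",
    "urinary_bladder": "pelvis",
    "femur_left": "leg",
}

def detect_regions(segmented_organs):
    """Return the set of body regions covered by the segmented organs."""
    return {ORGAN_TO_REGION[o] for o in segmented_organs if o in ORGAN_TO_REGION}

# Example: a chest-abdomen CT where the model found heart, lungs, and liver.
print(sorted(detect_regions({"heart", "lung_upper_lobe_left", "liver"})))
# -> ['abdomen', 'chest']

# Toy single-label evaluation with support-weighted F1, i.e. per-class F1
# averaged with each class weighted by its number of ground-truth samples.
y_true = ["chest", "abdomen", "head", "head"]
y_pred = ["chest", "abdomen", "head", "chest"]
print(f1_score(y_true, y_pred, average="weighted"))  # -> 0.75

Weighted F1 is the natural choice here because body-region frequencies in a heterogeneous scan collection are imbalanced; weighting per-class F1 by class support keeps rare regions from dominating or vanishing from the aggregate score.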

URL

https://arxiv.org/abs/2602.08717

PDF

https://arxiv.org/pdf/2602.08717.pdf

