Paper Reading AI Learner

Test Time Training for 4D Medical Image Interpolation

2025-02-04 14:19:16
Qikang Zhang, Yingjie Lei, Zihao Zheng, Ziyang Chen, Zhonghao Xie

Abstract

4D medical image interpolation is essential for improving temporal resolution and diagnostic precision in clinical applications. Previous works ignore the problem of distribution shifts, resulting in poor generalization under different distributions. A natural solution would be to adapt the model to a new test distribution, but this cannot be done if the test input comes without ground truth labels. In this paper, we propose a novel test-time training framework that uses self-supervision to adapt the model to a new distribution without requiring any labels. Specifically, before performing frame interpolation on each test video, the model is trained on that same instance with a self-supervised task, such as rotation prediction or image reconstruction. We conduct experiments on two publicly available 4D medical image interpolation datasets, Cardiac and 4D-Lung. The experimental results show that the proposed method achieves strong performance across evaluation metrics on both datasets, reaching peak signal-to-noise ratios of 33.73 dB on Cardiac and 34.02 dB on 4D-Lung. Our method not only advances 4D medical image interpolation but also provides a template for domain adaptation in other fields such as image segmentation and image registration.
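
The adaptation loop described in the abstract can be sketched in a few lines. Below is a minimal PyTorch illustration of test-time training with rotation prediction, not the authors' released code: it assumes a model exposing an `encode` method (shared encoder) and an `interpolate` method, plus a hypothetical 4-way rotation classifier `rot_head`; all of these names are assumptions made for the example.

```python
import copy
import torch
import torch.nn.functional as F

def test_time_adapt(model, rot_head, frames, steps=10, lr=1e-4):
    """Adapt a copy of `model` to a single test video via rotation
    prediction, then run frame interpolation with the adapted weights.

    frames: tensor of shape (T, C, H, W) -- one 4D test instance.
    """
    # Start each test instance from the same source-trained weights,
    # so adaptation on one patient never leaks into the next.
    adapted = copy.deepcopy(model)
    optimizer = torch.optim.Adam(
        list(adapted.parameters()) + list(rot_head.parameters()), lr=lr
    )

    adapted.train()
    for _ in range(steps):
        # Self-supervised task: rotate every frame by a random multiple
        # of 90 degrees and ask the model which rotation was applied.
        k = torch.randint(0, 4, (frames.size(0),))
        rotated = torch.stack(
            [torch.rot90(f, int(ki), dims=(-2, -1)) for f, ki in zip(frames, k)]
        )
        feats = adapted.encode(rotated)   # assumed shared-encoder API
        logits = rot_head(feats)          # hypothetical 4-way classifier
        loss = F.cross_entropy(logits, k)

        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

    # Interpolate the intermediate frames with the adapted model.
    adapted.eval()
    with torch.no_grad():
        return adapted.interpolate(frames)  # assumed interpolation API
```

Because the self-supervised loss needs no ground truth, this loop can run on each test instance at deployment time; the per-instance copy is one common design choice for keeping the deployed weights from drifting across patients.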

URL

https://arxiv.org/abs/2502.02341

PDF

https://arxiv.org/pdf/2502.02341.pdf

