Abstract
Video object segmentation is crucial for the efficient analysis of complex medical video data, yet it faces significant challenges in data availability and annotation. We introduce the task of one-shot medical video object segmentation, which requires separating foreground and background pixels throughout a video given only the mask annotation of the first frame. To address this problem, we propose a temporal contrastive memory network comprising image and mask encoders that learn feature representations, a temporal contrastive memory bank that stores these features and explicitly models inter-frame relationships by aligning embeddings from adjacent frames while pushing apart those from distant ones, and a decoder that fuses encoded image features with memory readouts for segmentation. We also collect a diverse, multi-source medical video dataset spanning various modalities and anatomies to benchmark this task. Extensive experiments demonstrate state-of-the-art performance in segmenting both seen and unseen structures from a single exemplar, showing the ability to generalize from scarce labels. This highlights the potential to alleviate the annotation burden in medical video analysis. Code is available at this https URL.
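To make the temporal contrastive idea concrete, below is a minimal, illustrative sketch (not the authors' implementation) of a contrastive objective over per-frame embeddings: adjacent frames are pulled together while temporally distant frames are pushed apart, as the abstract describes for the memory bank. The function name `temporal_contrastive_loss` and the `adjacent_radius` / `distant_radius` / `temperature` parameters are assumptions introduced here for illustration only.

```python
# Hypothetical sketch of a temporal contrastive loss over frame embeddings.
# Assumes one pooled embedding per frame (e.g. from the memory-bank features).
import torch
import torch.nn.functional as F

def temporal_contrastive_loss(frame_emb: torch.Tensor,
                              adjacent_radius: int = 1,
                              distant_radius: int = 4,
                              temperature: float = 0.1) -> torch.Tensor:
    """frame_emb: (T, D) tensor, one embedding per video frame."""
    T = frame_emb.size(0)
    z = F.normalize(frame_emb, dim=1)             # compare in cosine-similarity space
    sim = z @ z.t() / temperature                 # (T, T) pairwise similarities

    idx = torch.arange(T, device=frame_emb.device)
    dist = (idx[:, None] - idx[None, :]).abs()    # temporal distance between frame pairs
    pos_mask = (dist > 0) & (dist <= adjacent_radius)  # adjacent frames act as positives
    neg_mask = dist >= distant_radius                   # distant frames act as negatives

    losses = []
    for t in range(T):
        pos = sim[t][pos_mask[t]]
        neg = sim[t][neg_mask[t]]
        if pos.numel() == 0 or neg.numel() == 0:
            continue
        # InfoNCE-style term: each adjacent positive scored against all distant negatives
        logits = torch.cat([pos.unsqueeze(1), neg.expand(pos.numel(), -1)], dim=1)
        target = torch.zeros(pos.numel(), dtype=torch.long, device=frame_emb.device)
        losses.append(F.cross_entropy(logits, target))
    return torch.stack(losses).mean() if losses else sim.new_zeros(())

# Usage sketch: embeddings for a 16-frame clip with 256-dim features.
loss = temporal_contrastive_loss(torch.randn(16, 256))
```

In this sketch the loss would be combined with the segmentation objective during training; the specific radii, temperature, and weighting used by the paper are not given in the abstract.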
URL
https://arxiv.org/abs/2503.14979