Abstract
This short technical report demonstrates a simple technique that yields state-of-the-art results on medical image-text matching tasks. We analyze the use of OpenAI's CLIP, a general image-text matching model, and observe that CLIP's limited textual input size has a negative impact on downstream performance in the medical domain, where encoding longer textual contexts is often required. We therefore train and release ClipMD, which uses a simple sliding-window technique to encode textual captions. ClipMD was tested on two medical image-text datasets and compared with other image-text matching models. The results show that ClipMD outperforms the other models on both datasets by a large margin. We make our code and pretrained model publicly available.
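The core idea, encoding a caption longer than CLIP's fixed context limit (77 tokens) by splitting it into overlapping windows and pooling the per-window embeddings, can be sketched as follows. This is a minimal illustration, not the exact ClipMD recipe: the window size, stride, and mean-pooling step are assumptions, and `encode_fn` stands in for a real text encoder.

```python
# Hedged sketch of a sliding-window caption encoder. Assumes a token-level
# encoder `encode_fn` (hypothetical) that maps a list of token ids to an
# embedding vector; the 77-token window mirrors CLIP's context limit.

def sliding_windows(token_ids, window=77, stride=38):
    """Split a token sequence into overlapping windows of length <= window."""
    if len(token_ids) <= window:
        return [token_ids]
    windows = []
    start = 0
    while start < len(token_ids):
        windows.append(token_ids[start:start + window])
        if start + window >= len(token_ids):
            break
        start += stride
    return windows

def encode_long_caption(token_ids, encode_fn, window=77, stride=38):
    """Encode each window separately and mean-pool the per-window embeddings."""
    chunks = sliding_windows(token_ids, window, stride)
    embeddings = [encode_fn(chunk) for chunk in chunks]
    dim = len(embeddings[0])
    return [sum(e[i] for e in embeddings) / len(embeddings) for i in range(dim)]
```

A caption of 100 tokens would be split into two overlapping windows (tokens 0-76 and 38-99), each encoded independently, with the final caption embedding taken as the mean of the two window embeddings.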
URL
https://arxiv.org/abs/2303.13340