Abstract
Breast ultrasound videos contain richer information than static ultrasound images, so it is more meaningful to develop video models for this diagnosis task. However, collecting ultrasound video datasets is much harder. In this paper, we explore the feasibility of enhancing ultrasound video classification using a static image dataset. To this end, we propose KGA-Net and a coherence loss. KGA-Net is trained on both video clips and static images. The coherence loss uses feature centers generated from the static images to guide the frame attention in the video model. Our KGA-Net boosts performance on the public BUSV dataset by a large margin, and the visualization of the frame attention demonstrates the explainability of our method. The code and model weights will be made publicly available.
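The abstract does not give the exact formulation of the coherence loss, but the idea — pulling an attention-weighted video representation toward per-class feature centers computed from static images — can be illustrated with a minimal sketch. Everything below (the function name, tensor shapes, and the squared-distance form of the loss) is an assumption for illustration, not the paper's actual definition:

```python
import numpy as np

def coherence_loss(frame_feats, frame_attn, centers, labels):
    """Hypothetical sketch of a coherence loss: pull the attention-weighted
    video feature toward the class feature center from the image branch.

    frame_feats: (B, T, D) per-frame features from the video model
    frame_attn:  (B, T) frame-attention weights (each row sums to 1)
    centers:     (C, D) class feature centers computed from static images
    labels:      (B,) ground-truth class index for each video clip
    """
    # Aggregate frames into one clip-level feature via the attention weights.
    video_feat = (frame_attn[..., None] * frame_feats).sum(axis=1)  # (B, D)
    # Look up the image-derived center for each clip's class.
    target = centers[labels]                                        # (B, D)
    # Mean squared distance between clip features and their class centers.
    return float(((video_feat - target) ** 2).sum(axis=1).mean())
```

Minimizing such a term encourages the frame attention to emphasize frames whose features resemble the (presumably lesion-centric) image statistics, which is one plausible reading of how the static images "guide" the attention.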
URL
https://arxiv.org/abs/2306.06877