Abstract
Over the past ten years, medical image analysis has made remarkable progress with the help of deep learning, especially the rapid development of deep neural networks. However, effectively exploiting the relational information between the various tissues and organs in medical images remains a challenging and under-explored problem. In this thesis, we propose two novel solutions to this problem based on deep relational learning. First, we propose a context-aware fully convolutional network that effectively models implicit relational information between features to perform medical image segmentation. The network achieves state-of-the-art segmentation results on the Multimodal Brain Tumor Segmentation 2017 (BraTS2017) and Multimodal Brain Tumor Segmentation 2018 (BraTS2018) datasets. Subsequently, we propose a new hierarchical homography estimation network that achieves accurate medical image mosaicing by learning the explicit spatial relationship between adjacent frames. In experiments on the UCL Fetoscopy Placenta dataset, our hierarchical homography estimation network outperforms other state-of-the-art mosaicing methods while generating robust and meaningful mosaicing results on unseen frames.
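Mosaicing from pairwise homography estimates, as described above, typically chains each frame-to-frame homography into a global transform that places every frame in the first frame's coordinate system. The following is a minimal numpy sketch of that chaining step, not the thesis's implementation: it assumes the estimation network outputs a 3×3 matrix mapping each frame into its predecessor, and the function names are illustrative.

```python
import numpy as np

def accumulate_homographies(pairwise_Hs):
    """Compose pairwise homographies into global transforms.

    pairwise_Hs[t] is assumed to map frame t+1 into frame t's
    coordinates; the returned list maps every frame into frame 0.
    """
    globals_ = [np.eye(3)]
    for H in pairwise_Hs:
        G = globals_[-1] @ H          # chain onto the running product
        globals_.append(G / G[2, 2])  # normalize the projective scale
    return globals_

def warp_point(H, x, y):
    """Apply a homography H to a 2-D point via homogeneous coordinates."""
    p = H @ np.array([x, y, 1.0])
    return p[0] / p[2], p[1] / p[2]
```

With the global transforms in hand, each frame can be warped into the mosaic canvas; accumulated drift in long chains is why accurate pairwise estimates matter for meaningful mosaics on unseen frames.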
Abstract (translated)
Over the past decade, with the help of deep learning, especially the rapid development of deep neural networks, medical image analysis has made remarkable progress. However, how to effectively exploit the implicit relationships between the various tissues and organs in medical images remains a highly challenging and insufficiently studied problem. In this thesis, we propose two novel solutions based on deep relational learning. First, we propose a context-aware fully convolutional network that effectively models the implicit relational information between features for medical image segmentation. The network achieves state-of-the-art segmentation results on the Multimodal Brain Tumor Segmentation 2017 (BraTS2017) and Multimodal Brain Tumor Segmentation 2018 (BraTS2018) datasets. Subsequently, we propose a new hierarchical homography estimation network that achieves accurate medical image mosaicing by learning the explicit spatial relationships between adjacent frames. We conduct experiments on the UCL Fetoscopy Placenta dataset, and our hierarchical homography estimation network outperforms other state-of-the-art mosaicing methods while generating robust and meaningful mosaicing results on unseen frames.
URL
https://arxiv.org/abs/2303.16099