Abstract
The recent emergence of deep learning has led to a great deal of work on designing supervised deep semantic segmentation algorithms. Since in many tasks sufficient pixel-level labels are very difficult to obtain, we propose a method which combines a Gaussian mixture model (GMM) with unsupervised deep learning techniques. In the standard GMM, the pixel values within each sub-region are modelled by a Gaussian distribution. In order to identify the different regions, the parameter vector that minimizes the negative log-likelihood (NLL) function of the GMM has to be approximated. For this task, iterative optimization methods such as the expectation-maximization (EM) algorithm are usually used. In this paper, we propose to estimate these parameters directly from the image using a convolutional neural network (CNN). We thus modify the iterative procedure of the EM algorithm, replacing the expectation step by a gradient step with respect to the network's parameters. This means that the network is trained to minimize the NLL function of the GMM, which comes with at least two advantages. First, once trained, the network is able to predict label probabilities very quickly compared with time-consuming iterative optimization methods. Second, due to the deep image prior, our method is able to partially overcome one of the main disadvantages of GMMs, namely their assumption of independence between neighboring pixels, which ignores the correlation between them. We demonstrate the advantages of our method in various experiments, using myocardial infarct segmentation on multi-sequence MRI images as an example.
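As a point of reference, the iterative EM baseline that the paper's gradient-based approach replaces can be sketched for a 1D GMM over pixel intensities. This is a minimal NumPy illustration of alternating E- and M-steps to minimize the NLL, not the authors' code; the function name and interface are assumptions for the sketch.

```python
import numpy as np

def gmm_em(x, K=2, iters=50, seed=0):
    """Fit a 1D Gaussian mixture to flattened pixel values x via EM."""
    rng = np.random.default_rng(seed)
    N = x.shape[0]
    pi = np.full(K, 1.0 / K)                 # mixing weights
    mu = rng.choice(x, K, replace=False)     # component means
    var = np.full(K, x.var())                # component variances
    for _ in range(iters):
        # E-step: posterior responsibilities r[i, k] = p(k | x_i)
        log_p = (-0.5 * (x[:, None] - mu) ** 2 / var
                 - 0.5 * np.log(2 * np.pi * var) + np.log(pi))
        log_norm = np.logaddexp.reduce(log_p, axis=1, keepdims=True)
        r = np.exp(log_p - log_norm)
        # M-step: closed-form updates that decrease the NLL
        Nk = r.sum(axis=0)
        pi = Nk / N
        mu = (r * x[:, None]).sum(axis=0) / Nk
        var = (r * (x[:, None] - mu) ** 2).sum(axis=0) / Nk
    nll = -log_norm.sum()                    # negative log-likelihood
    return pi, mu, var, r, nll
```

In the paper's method, the responsibilities produced by the E-step are instead predicted by a CNN from the image itself, and training amounts to gradient steps on this same NLL with respect to the network's parameters.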
Abstract (translated)
In recent years, the emergence of deep learning has led to a great deal of work on designing supervised deep semantic segmentation algorithms. Since sufficient pixel-level labels are very difficult to obtain in many tasks, we propose a method that combines a Gaussian mixture model (GMM) with unsupervised deep learning techniques. In the standard GMM, the pixel values of each sub-region are modelled by a Gaussian distribution. To identify the different regions, the parameter vector minimizing the negative log-likelihood (NLL) function of the GMM must be approximated. For this task, iterative optimization methods such as the expectation-maximization (EM) algorithm are usually used. In this paper, we propose to estimate these parameters directly from the image using a convolutional neural network (CNN). We thus replace the expectation step of the EM algorithm with a gradient step with respect to the network's parameters. This means the network is trained to minimize the NLL function of the GMM, which has at least two advantages. First, once trained, the network can predict label probabilities very quickly compared with time-consuming iterative optimization methods. Second, due to the deep image prior, our method can partially overcome a main disadvantage of GMMs, namely that they do not take into account the correlation between neighboring pixels. We demonstrate the advantages of our method in various experiments on myocardial infarct segmentation in multi-sequence MRI images.
URL
https://arxiv.org/abs/2404.12252