Paper Reading AI Learner

Deep Gaussian mixture model for unsupervised image segmentation

2024-04-18 15:20:59
Matthias Schwab, Agnes Mayr, Markus Haltmeier

Abstract

The recent emergence of deep learning has led to a great deal of work on designing supervised deep semantic segmentation algorithms. Since sufficient pixel-level labels are very difficult to obtain for many tasks, we propose a method that combines a Gaussian mixture model (GMM) with unsupervised deep learning techniques. In the standard GMM, the pixel values within each sub-region are modelled by a Gaussian distribution. To identify the different regions, the parameter vector that minimizes the negative log-likelihood (NLL) function of the GMM has to be approximated. For this task, iterative optimization methods such as the expectation-maximization (EM) algorithm are usually used. In this paper, we propose to estimate these parameters directly from the image using a convolutional neural network (CNN). We thus modify the iterative procedure of the EM algorithm, replacing the expectation step by a gradient step with respect to the network's parameters. This means that the network is trained to minimize the NLL function of the GMM, which comes with at least two advantages. First, once trained, the network can predict label probabilities very quickly compared with time-consuming iterative optimization methods. Second, owing to the deep image prior, our method can partially overcome one of the main disadvantages of GMMs, namely that they assume independence between neighboring pixels and therefore ignore the correlation between them. We demonstrate the advantages of our method in various experiments on myocardial infarct segmentation in multi-sequence MRI images.
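To make the objective concrete, the following NumPy sketch computes the pixel-independent GMM negative log-likelihood that the abstract describes — the quantity minimized either by EM or, in the proposed method, by gradient steps on a CNN that outputs the per-pixel mixing weights. This is an illustrative sketch, not the authors' implementation; the function and variable names (`gmm_nll`, `pi`, etc.) are my own.

```python
import numpy as np

def gmm_nll(pixels, pi, mu, sigma):
    """Negative log-likelihood of a K-component GMM over image pixels.

    pixels: (N,) flattened gray values
    pi:     (N, K) per-pixel mixing weights (in the paper, a CNN output)
    mu:     (K,) component means
    sigma:  (K,) component standard deviations
    """
    # Gaussian densities of every pixel under every component, shape (N, K)
    dens = np.exp(-0.5 * ((pixels[:, None] - mu) / sigma) ** 2) / (
        sigma * np.sqrt(2 * np.pi)
    )
    mix = np.sum(pi * dens, axis=1)        # per-pixel mixture likelihood
    return -np.sum(np.log(mix + 1e-12))    # NLL summed over all pixels

# Tiny synthetic "image": two well-separated intensity populations
rng = np.random.default_rng(0)
pixels = np.concatenate([rng.normal(0.2, 0.05, 50), rng.normal(0.8, 0.05, 50)])

K = 2
mu, sigma = np.array([0.2, 0.8]), np.array([0.05, 0.05])

# Uninformative uniform weights vs. (near-)correct one-hot assignments
uniform = np.full((100, K), 0.5)
correct = np.zeros((100, K))
correct[:50, 0] = 1.0
correct[50:, 1] = 1.0

# Correct assignments yield a strictly lower NLL than uniform ones
print(gmm_nll(pixels, correct, mu, sigma) < gmm_nll(pixels, uniform, mu, sigma))
```

Training the network amounts to lowering this NLL by adjusting `pi` through the CNN's weights, which is where the deep image prior injects spatial coherence that the plain pixel-independent GMM lacks.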

URL

https://arxiv.org/abs/2404.12252

PDF

https://arxiv.org/pdf/2404.12252.pdf

