Abstract
In real-world scenarios, many data processing problems involve heterogeneous images acquired with different imaging modalities. Since these multimodal images originate from the same phenomenon, it is reasonable to assume that they share common attributes or characteristics. In this paper, we propose a multimodal image processing framework based on coupled dictionary learning that captures both the similarities and the disparities between different image modalities. In particular, our framework captures structural similarities shared across modalities, such as edges, corners, and other elementary primitives, in a learned sparse transform domain rather than the original pixel domain; these shared representations can then be used to improve image processing tasks such as denoising, inpainting, and super-resolution. Practical experiments demonstrate that incorporating multimodal information through our framework brings notable benefits.
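The core idea, two modality-specific dictionaries coupled through one shared sparse code, can be sketched in a few lines. The following is a minimal illustrative example, not the authors' algorithm: it generates synthetic paired patches from a common latent sparse code, stacks the two modalities, and alternates an ISTA sparse-coding step with a least-squares dictionary update. All variable names and the ISTA/least-squares choice are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic paired patches from two hypothetical modalities (e.g. intensity
# and depth), generated from the SAME latent sparse code to mimic the shared
# structure (edges, corners) a coupled model is meant to exploit.
n_atoms, patch_dim, n_samples = 32, 16, 500
true_codes = rng.standard_normal((n_atoms, n_samples))
true_codes *= rng.random((n_atoms, n_samples)) < 0.1   # ~10% active atoms
X = rng.standard_normal((patch_dim, n_atoms)) @ true_codes  # modality 1
Y = rng.standard_normal((patch_dim, n_atoms)) @ true_codes  # modality 2

def normalize(D):
    """Rescale dictionary atoms (columns) to unit norm."""
    return D / np.maximum(np.linalg.norm(D, axis=0, keepdims=True), 1e-12)

# Coupling: stack both modalities so a single code matrix Z must explain both.
S = np.vstack([X, Y])                       # (2 * patch_dim, n_samples)
D = normalize(rng.standard_normal((2 * patch_dim, n_atoms)))
Z = np.zeros((n_atoms, n_samples))
lam = 0.1                                   # sparsity weight (assumed value)

for _ in range(50):
    # ISTA step: gradient descent on ||S - D Z||^2, then soft-thresholding.
    step = 1.0 / np.linalg.norm(D.T @ D, 2)
    Z = Z - step * (D.T @ (D @ Z - S))
    Z = np.sign(Z) * np.maximum(np.abs(Z) - step * lam, 0.0)
    # Dictionary update: least-squares fit to the current codes.
    D = normalize(S @ np.linalg.pinv(Z))

# Per-modality dictionaries that share the code Z.
D_x, D_y = D[:patch_dim], D[patch_dim:]
recon_err = np.linalg.norm(S - D @ Z) / np.linalg.norm(S)
print(f"relative reconstruction error: {recon_err:.3f}")
```

In a restoration setting one would sparse-code a degraded patch in one modality (or both) and reconstruct with the learned dictionaries; this sketch only shows the coupled training loop itself.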
URL
https://arxiv.org/abs/1806.09882