Abstract
We aim to leverage diffusion to address the challenging image matting task. However, the high computational overhead and the inconsistency of noise sampling between training and inference pose significant obstacles to this goal. In this paper, we present DiffMatte, a solution designed to overcome these challenges effectively. First, DiffMatte decouples the decoder from the intricately coupled matting network design, so that only one lightweight decoder is involved in the iterations of the diffusion process. This strategy mitigates the growth of computational overhead as the number of sampling steps increases. Second, we employ a self-aligned training strategy with uniform time intervals, ensuring consistent noise sampling between training and inference across the entire time domain. DiffMatte is designed with flexibility in mind and can seamlessly integrate into various modern matting architectures. Extensive experimental results demonstrate that DiffMatte not only reaches the state-of-the-art level on the Composition-1k test set, surpassing the previous best methods by 5% in the SAD metric and 15% in the MSE metric, but also shows stronger generalization ability on other benchmarks.
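The core efficiency idea in the abstract can be sketched as follows: the heavy image encoder runs once, outside the diffusion loop, while only a lightweight decoder is iterated over uniformly spaced timesteps. This is a minimal illustrative sketch, not the paper's implementation; `encode_once` and `light_decoder` are hypothetical stand-ins for the real networks.

```python
import numpy as np

def encode_once(image):
    # Hypothetical heavy encoder: invoked a single time, outside the loop.
    # Stand-in for deep image features of shape (H, W, 1).
    return image.mean(axis=-1, keepdims=True)

def light_decoder(features, alpha_t, t):
    # Hypothetical lightweight decoder: refines the noisy alpha matte at
    # timestep t using the precomputed image features.
    return 0.5 * (alpha_t + features[..., 0])

def sample_matte(image, num_steps=10, T=1000):
    # Encoder cost is paid once; only the decoder runs per diffusion step,
    # so total cost grows slowly with num_steps.
    features = encode_once(image)
    rng = np.random.default_rng(0)
    alpha = rng.standard_normal(image.shape[:2])  # initial noise
    # Uniform time intervals across [T, 0], mirroring the abstract's
    # uniform-interval sampling schedule.
    for t in np.linspace(T, 0, num_steps):
        alpha = light_decoder(features, alpha, t)
    return np.clip(alpha, 0.0, 1.0)

matte = sample_matte(np.random.default_rng(1).random((8, 8, 3)))
```

The point of the structure, rather than the toy math, is what matters: because the decoder is the only component inside the loop, increasing the number of sampling steps adds only its (small) cost per step.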
URL
https://arxiv.org/abs/2312.05915