Abstract
Virtual Try-On (VTON) has become a crucial tool in e-commerce, enabling the realistic simulation of garments on individuals while preserving their original appearance and pose. Early VTON methods relied on single generative networks, but challenges remain in preserving fine-grained garment details due to limitations in feature extraction and fusion. To address these issues, recent approaches have adopted a dual-network paradigm, incorporating a complementary "ReferenceNet" to enhance garment feature extraction and fusion. While effective, this dual-network approach introduces significant computational overhead, limiting its scalability for high-resolution and long-duration image/video VTON applications. In this paper, we challenge the dual-network paradigm by proposing a novel single-network VTON method that overcomes the limitations of existing techniques. Our method, MNVTON, introduces a Modality-specific Normalization strategy that separately processes text, image, and video inputs, enabling them to share the same attention layers in a VTON network. Extensive experimental results demonstrate the effectiveness of our approach, showing that it consistently achieves higher-quality, more detailed results for both image and video VTON tasks. Our results suggest that the single-network paradigm can rival the performance of dual-network approaches, offering a more efficient alternative for high-quality, scalable VTON applications.
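Note: the abstract does not specify how Modality-specific Normalization is implemented; below is a minimal, hypothetical PyTorch sketch of one plausible reading, in which each modality (text, image, video) gets its own LayerNorm before all tokens pass through a single shared attention layer. The class, module names, and shapes are illustrative assumptions, not taken from the paper.

import torch
import torch.nn as nn

class ModalitySpecificNormAttention(nn.Module):
    """Hypothetical sketch: per-modality LayerNorm feeding one shared
    multi-head attention layer (an assumed reading of the abstract)."""

    def __init__(self, dim: int, num_heads: int = 8):
        super().__init__()
        # One normalization module per modality, with separate parameters.
        self.norms = nn.ModuleDict({
            "text": nn.LayerNorm(dim),
            "image": nn.LayerNorm(dim),
            "video": nn.LayerNorm(dim),
        })
        # A single attention layer shared by all modalities.
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)

    def forward(self, tokens: dict[str, torch.Tensor]) -> torch.Tensor:
        # Normalize each modality with its own statistics/parameters,
        # then concatenate along the sequence dimension.
        normed = [self.norms[m](x) for m, x in tokens.items()]
        seq = torch.cat(normed, dim=1)        # (B, T_text + T_img + T_vid, C)
        out, _ = self.attn(seq, seq, seq)     # shared attention over all tokens
        return out

# Toy usage with made-up token counts and feature dim 64.
tokens = {
    "text": torch.randn(2, 16, 64),
    "image": torch.randn(2, 256, 64),
    "video": torch.randn(2, 128, 64),
}
layer = ModalitySpecificNormAttention(dim=64)
print(layer(tokens).shape)  # torch.Size([2, 400, 64])

The design choice this sketch illustrates is that only the normalization is modality-specific while the attention weights are shared, which is one way a single network could handle heterogeneous inputs without a separate ReferenceNet.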
Abstract (translated)
Virtual Try-On (VTON) has become a key tool in e-commerce, enabling realistic simulation of garments on individuals while preserving their original appearance and pose. Early VTON methods relied on a single generative network, but due to limitations in feature extraction and fusion, challenges remain in preserving garment details. To address these problems, recent approaches have adopted a dual-network paradigm, introducing a complementary "ReferenceNet" to enhance the extraction and fusion of garment features. Although effective, this approach incurs significant computational overhead, limiting its scalability in high-resolution and long-duration video try-on applications. This paper proposes a novel single-network VTON method that challenges the existing dual-network paradigm and overcomes the limitations of current techniques. Our method, called MNVTON (Modality-specific Normalization VTON), introduces a modality-specific normalization strategy that processes text, image, and video inputs separately, allowing them to share the same attention layers within a single VTON network. Experimental results show that our method achieves higher-quality and more detailed results on both image and video virtual try-on tasks. Our findings indicate that the single-network paradigm can match the performance of dual-network approaches while offering a more efficient alternative for high-quality, scalable VTON applications.
URL
https://arxiv.org/abs/2501.05369