Paper Reading AI Learner

1-2-1: Renaissance of Single-Network Paradigm for Virtual Try-On

2025-01-09 16:49:04
Shuliang Ning, Yipeng Qin, Xiaoguang Han

Abstract

Virtual Try-On (VTON) has become a crucial tool in e-commerce, enabling the realistic simulation of garments on individuals while preserving their original appearance and pose. Early VTON methods relied on single generative networks, but challenges remain in preserving fine-grained garment details due to limitations in feature extraction and fusion. To address these issues, recent approaches have adopted a dual-network paradigm, incorporating a complementary "ReferenceNet" to enhance garment feature extraction and fusion. While effective, this dual-network approach introduces significant computational overhead, limiting its scalability for high-resolution and long-duration image/video VTON applications. In this paper, we challenge the dual-network paradigm by proposing a novel single-network VTON method that overcomes the limitations of existing techniques. Our method, MNVTON, introduces a Modality-specific Normalization strategy that separately processes text, image, and video inputs, enabling them to share the same attention layers in a VTON network. Extensive experimental results demonstrate the effectiveness of our approach, showing that it consistently achieves higher-quality, more detailed results for both image and video VTON tasks. Our results suggest that the single-network paradigm can rival the performance of dual-network approaches, offering a more efficient alternative for high-quality, scalable VTON applications.
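The core idea the abstract describes can be illustrated with a minimal sketch: give each modality its own normalization parameters, then let all normalized tokens pass through one shared attention layer instead of a separate ReferenceNet. This is not the paper's actual implementation; all function and parameter names here are hypothetical, and the attention is a bare single-head dot-product version for clarity.

```python
import numpy as np

def layer_norm(x, gamma, beta, eps=1e-5):
    # Normalize each token vector, then apply a modality-specific
    # scale (gamma) and shift (beta).
    mu = x.mean(axis=-1, keepdims=True)
    var = x.var(axis=-1, keepdims=True)
    return gamma * (x - mu) / np.sqrt(var + eps) + beta

def shared_attention(tokens_by_modality, params):
    # Each modality keeps its own normalization parameters (the
    # "modality-specific" part), but the normalized tokens of all
    # modalities are concatenated and processed by a single shared
    # attention layer (the "single-network" part).
    d = next(iter(tokens_by_modality.values())).shape[-1]
    normed = [
        layer_norm(x, params[m]["gamma"], params[m]["beta"])
        for m, x in tokens_by_modality.items()
    ]
    x = np.concatenate(normed, axis=0)              # (N_total, d)
    q, k, v = x @ params["Wq"], x @ params["Wk"], x @ params["Wv"]
    scores = q @ k.T / np.sqrt(d)
    # Numerically stable softmax over the key dimension.
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ v                               # (N_total, d)
```

Under this reading, cross-modal fusion happens "for free" inside the shared attention, since garment-image tokens and video-frame tokens attend to each other directly rather than being fused across two networks.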

URL

https://arxiv.org/abs/2501.05369

PDF

https://arxiv.org/pdf/2501.05369.pdf

