Paper Reading AI Learner

CPNet: Exploiting CLIP-based Attention Condenser and Probability Map Guidance for High-fidelity Talking Face Generation

2023-05-23 11:40:43
Jingning Xu, Benlai Tang, Mingjie Wang, Minghao Li, Meirong Ma

Abstract

Recently, talking face generation has drawn ever-increasing attention from the computer vision community due to its arduous challenges and widespread application scenarios, e.g., movie animation and virtual anchors. Although persistent efforts have been made to enhance the fidelity and lip-sync quality of generated talking face videos, there is still large room for improvement in synthesis quality and efficiency. In practice, prior attempts largely overlook fine-granularity feature extraction/integration and the consistency between probability distributions of landmarks, thereby incurring blurred local details and degraded fidelity. To mitigate these dilemmas, this paper proposes a novel CLIP-based Attention and Probability Map Guided Network (CPNet) for inferring high-fidelity talking face videos. Specifically, to meet the demand for fine-grained feature recalibration, a CLIP-based attention condenser is exploited to transfer knowledge with rich semantic priors from the prevailing CLIP model. Moreover, to guarantee consistency in probability space and suppress landmark ambiguity, we propose the density map of facial landmarks as an auxiliary supervisory signal that guides the landmark distribution learning of the generated frames. Extensive experiments on a widely used benchmark dataset demonstrate the superiority of CPNet over state-of-the-art methods in terms of image and lip-sync quality. In addition, a cohort of ablation studies is conducted to assess the impact of each pivotal component.
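
To make the first idea concrete, below is a minimal PyTorch sketch of one plausible realization of a CLIP-guided attention condenser: a channel-attention gate predicted from a frozen CLIP image embedding recalibrates generator feature maps, injecting semantic priors into fine-grained feature integration. The module name, MLP design, and dimensions are illustrative assumptions; the abstract does not specify the paper's exact condenser architecture.

```python
import torch
import torch.nn as nn

class CLIPAttentionCondenser(nn.Module):
    """Hypothetical sketch: recalibrates feature maps with per-channel
    gates predicted from a (frozen) CLIP image embedding."""
    def __init__(self, feat_channels: int, clip_dim: int = 512):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(clip_dim, feat_channels),
            nn.ReLU(inplace=True),
            nn.Linear(feat_channels, feat_channels),
            nn.Sigmoid(),  # per-channel gates in (0, 1)
        )

    def forward(self, feats: torch.Tensor, clip_emb: torch.Tensor) -> torch.Tensor:
        # feats: (B, C, H, W) generator features; clip_emb: (B, clip_dim)
        gates = self.mlp(clip_emb).unsqueeze(-1).unsqueeze(-1)  # (B, C, 1, 1)
        return feats * gates  # channel-wise recalibration

# Usage: clip_emb would come from a frozen CLIP image encoder.
feats = torch.randn(2, 256, 32, 32)
clip_emb = torch.randn(2, 512)
out = CLIPAttentionCondenser(256)(feats, clip_emb)
print(out.shape)  # torch.Size([2, 256, 32, 32])
```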
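
Similarly, here is a minimal sketch of the probability-map supervision: facial landmarks are rendered into a normalized sum-of-Gaussians density map, and an auxiliary loss matches the maps built from predicted and ground-truth landmarks. The Gaussian kernel width and the MSE penalty are assumptions; the paper's exact density-map formulation and loss are not given in this abstract.

```python
import torch

def landmark_density_map(landmarks: torch.Tensor, size, sigma: float = 2.0) -> torch.Tensor:
    """landmarks: (B, K, 2) pixel coords (x, y); returns (B, 1, H, W)
    density maps built as a normalized sum of isotropic Gaussians."""
    B, K, _ = landmarks.shape
    H, W = size
    ys = torch.arange(H, dtype=torch.float32).view(1, 1, H, 1)
    xs = torch.arange(W, dtype=torch.float32).view(1, 1, 1, W)
    lx = landmarks[..., 0].view(B, K, 1, 1)
    ly = landmarks[..., 1].view(B, K, 1, 1)
    g = torch.exp(-((xs - lx) ** 2 + (ys - ly) ** 2) / (2 * sigma ** 2))
    density = g.sum(dim=1, keepdim=True)                    # (B, 1, H, W)
    return density / density.sum(dim=(2, 3), keepdim=True)  # normalize to a distribution

def density_consistency_loss(pred_lms, gt_lms, size):
    # Assumed auxiliary loss: MSE between the two density maps.
    return torch.nn.functional.mse_loss(
        landmark_density_map(pred_lms, size),
        landmark_density_map(gt_lms, size))

# Usage with 68 landmarks on a 96x96 frame.
pred = torch.rand(2, 68, 2) * 96
gt = torch.rand(2, 68, 2) * 96
print(density_consistency_loss(pred, gt, (96, 96)))
```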

URL

https://arxiv.org/abs/2305.13962

PDF

https://arxiv.org/pdf/2305.13962.pdf

