Paper Reading AI Learner

3D Multi-frame Fusion for Video Stabilization

2024-04-19 13:43:14
Zhan Peng, Xinyi Ye, Weiyue Zhao, Tianqi Liu, Huiqiang Sun, Baopu Li, Zhiguo Cao

Abstract

In this paper, we present RStab, a novel framework for video stabilization that integrates 3D multi-frame fusion through volume rendering. Departing from conventional methods, we introduce a 3D multi-frame perspective to generate stabilized images, addressing the challenge of full-frame generation while preserving structure. The core of our RStab framework lies in Stabilized Rendering (SR), a volume rendering module that goes beyond image fusion by incorporating feature fusion, fusing multi-frame information in 3D space. Specifically, SR warps features and colors from multiple frames by projection and fuses them into descriptors to render the stabilized image. However, the precision of the warped information depends on the projection accuracy, a factor significantly influenced by dynamic regions. In response, we introduce the Adaptive Ray Range (ARR) module, which integrates depth priors to adaptively define the sampling range of the projection process. Additionally, we propose Color Correction (CC), which assists geometric constraints with optical flow for accurate color aggregation. Thanks to these three modules, our RStab demonstrates superior performance to previous stabilizers in field of view (FOV), image quality, and video stability across various datasets.
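To make the abstract's pipeline concrete, below is a minimal numpy sketch of the general idea behind SR: colors and features warped from multiple source frames are fused into per-sample descriptors, and the stabilized pixel is obtained by standard volume rendering (alpha compositing) along the ray. This is a hypothetical illustration, not the paper's implementation; the function name, the similarity-based view weighting, and all shapes are assumptions.

```python
import numpy as np

def stabilized_rendering_sketch(warped_colors, warped_feats, densities, deltas):
    """Illustrative 3D multi-frame fusion by volume rendering (assumed shapes).

    warped_colors: (S, V, 3) colors warped from V source frames at S ray samples
    warped_feats:  (S, V, F) features warped from the same V frames
    densities:     (S,)      per-sample densities (assumed given here; in a real
                             system they would be predicted from the descriptors)
    deltas:        (S,)      distances between adjacent samples along the ray
    """
    # Fuse multi-view colors: weight each view by its feature's closeness to
    # the cross-view mean feature (a simple stand-in for learned fusion).
    mean_feat = warped_feats.mean(axis=1, keepdims=True)           # (S, 1, F)
    sim = -np.linalg.norm(warped_feats - mean_feat, axis=-1)       # (S, V)
    w = np.exp(sim) / np.exp(sim).sum(axis=1, keepdims=True)       # softmax over views
    fused_color = (w[..., None] * warped_colors).sum(axis=1)       # (S, 3)

    # Standard volume rendering: alpha compositing along the ray.
    alpha = 1.0 - np.exp(-densities * deltas)                      # (S,)
    trans = np.cumprod(np.concatenate([[1.0], 1.0 - alpha[:-1]]))  # transmittance
    weights = alpha * trans                                        # (S,)
    pixel = (weights[:, None] * fused_color).sum(axis=0)           # (3,)
    return pixel
```

In this toy form, the ARR module would correspond to choosing where the S samples lie (a range centered on a depth prior rather than a fixed near/far interval), and CC would correct the warped colors before fusion.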

URL

https://arxiv.org/abs/2404.12887

PDF

https://arxiv.org/pdf/2404.12887.pdf

