Abstract
In this paper, we present RStab, a novel framework for video stabilization that integrates 3D multi-frame fusion through volume rendering. Departing from conventional methods, we introduce a 3D multi-frame perspective to generate stabilized images, addressing the challenge of full-frame generation while preserving structure. The core of our RStab framework is Stabilized Rendering (SR), a volume rendering module that fuses multi-frame information in 3D space, extending beyond image fusion to incorporate feature fusion. Specifically, SR warps features and colors from multiple frames by projection and fuses them into descriptors to render the stabilized image. However, the precision of the warped information depends on projection accuracy, which is significantly affected by dynamic regions. In response, we introduce the Adaptive Ray Range (ARR) module, which incorporates depth priors to adaptively define the sampling range of the projection process. Additionally, we propose Color Correction (CC), which assists geometric constraints with optical flow for accurate color aggregation. Thanks to these three modules, our RStab outperforms previous stabilizers in field of view (FOV), image quality, and video stability across various datasets.
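The SR idea described above, warping colors and features from neighboring frames by projecting 3D sample points into them and then fusing the gathered information per point, can be sketched roughly as follows. This is a minimal illustrative sketch, not the paper's implementation: the function names, the mean-feature similarity weighting, and all shapes are assumptions made for the example.

```python
import numpy as np

def project(points, K, pose):
    """Project 3D world points (N, 3) into a camera with intrinsics K (3, 3)
    and world-to-camera pose (4, 4). Returns pixel coords (N, 2) and depths (N,).
    This is the standard pinhole projection used here as a stand-in for SR's warping step."""
    pts_h = np.concatenate([points, np.ones((points.shape[0], 1))], axis=1)
    cam = (pose @ pts_h.T).T[:, :3]          # points in camera coordinates
    z = cam[:, 2:3]                           # depth along the optical axis
    uv = (K @ cam.T).T[:, :2] / z             # perspective division
    return uv, z[:, 0]

def fuse_multiframe(sample_colors, sample_feats):
    """Fuse colors warped from F source frames at S sample points.
    sample_colors: (F, S, 3), sample_feats: (F, S, D).
    Weights come from feature agreement with the mean feature (softmax over frames),
    a crude illustrative proxy for SR's learned descriptor fusion."""
    mean_feat = sample_feats.mean(axis=0, keepdims=True)
    sim = -np.sum((sample_feats - mean_feat) ** 2, axis=-1)   # (F, S) agreement scores
    w = np.exp(sim - sim.max(axis=0, keepdims=True))
    w /= w.sum(axis=0, keepdims=True)                         # softmax over frames
    return np.sum(w[..., None] * sample_colors, axis=0)       # fused colors (S, 3)
```

In this sketch, frames whose warped features disagree with the consensus (e.g. due to dynamic objects causing inaccurate projection) receive lower fusion weights, which loosely mirrors why the paper couples SR with ARR depth priors and CC color aggregation.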
URL
https://arxiv.org/abs/2404.12887