Abstract
Existing homography and optical flow methods are error-prone in challenging scenes, such as fog, rain, night, and snow, because basic assumptions such as brightness and gradient constancy are broken. To address this issue, we present an unsupervised learning approach that fuses gyroscope data into homography and optical flow learning. Specifically, we first convert gyroscope readings into motion fields, named gyro fields. Second, we design a self-guided fusion module (SGF) that fuses the background motion extracted from the gyro field with the optical flow and guides the network to focus on motion details. Meanwhile, we propose a homography decoder module (HD) that combines the gyro field with intermediate results of the SGF to produce the homography. To the best of our knowledge, this is the first deep learning framework that fuses gyroscope data and image content for both deep homography and optical flow learning. To validate our method, we propose a new dataset that covers regular and challenging scenes. Experiments show that our method outperforms state-of-the-art methods in both regular and challenging scenes.
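The gyro field described above can be understood as the per-pixel motion induced by pure camera rotation: gyroscope readings are integrated into a rotation matrix R, which together with the camera intrinsics K yields the rotational homography H = K R K⁻¹, and warping the pixel grid by H gives the motion field. Below is a minimal sketch of this standard construction; the function names, the constant-angular-velocity integration, and the pinhole intrinsics are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def rotation_from_gyro(omega, dt):
    """Integrate an assumed-constant angular velocity omega (rad/s)
    over dt seconds into a rotation matrix via the Rodrigues formula."""
    theta = np.linalg.norm(omega) * dt
    if theta < 1e-12:
        return np.eye(3)
    axis = omega / np.linalg.norm(omega)
    # Skew-symmetric cross-product matrix of the rotation axis.
    K = np.array([[0.0, -axis[2], axis[1]],
                  [axis[2], 0.0, -axis[0]],
                  [-axis[1], axis[0], 0.0]])
    return np.eye(3) + np.sin(theta) * K + (1.0 - np.cos(theta)) * (K @ K)

def gyro_field(omega, dt, K_intr, h, w):
    """Per-pixel motion field induced by camera rotation:
    H = K R K^-1, flow(x) = H(x) - x in pixel coordinates."""
    R = rotation_from_gyro(omega, dt)
    H = K_intr @ R @ np.linalg.inv(K_intr)
    ys, xs = np.mgrid[0:h, 0:w]
    pts = np.stack([xs, ys, np.ones_like(xs)], axis=-1).reshape(-1, 3).T  # 3 x N
    warped = H @ pts
    warped = warped[:2] / warped[2]          # perspective divide
    return (warped - pts[:2]).T.reshape(h, w, 2)
```

With zero angular velocity the gyro field is identically zero, and a small rotation about the vertical axis produces a nearly uniform horizontal shift of roughly f·tan(θ) pixels, which is the background motion the SGF module is described as extracting.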
URL
https://arxiv.org/abs/2301.10018