Abstract
We propose RainyScape, an unsupervised framework for reconstructing clean scenes from a collection of multi-view rainy images. RainyScape consists of two main modules: a neural rendering module and a rain-prediction module that incorporates a predictor network and a learnable latent embedding capturing the rain characteristics of the scene. Specifically, exploiting the spectral bias property of neural networks, we first optimize the neural rendering pipeline to obtain a low-frequency scene representation. We then jointly optimize the two modules, driven by the proposed adaptive direction-sensitive gradient-based reconstruction loss, which encourages the network to distinguish between scene details and rain streaks and facilitates the propagation of gradients to the relevant components. Extensive experiments on both the classic neural radiance field and the recently proposed 3D Gaussian splatting demonstrate the superiority of our method in effectively eliminating rain streaks and rendering clean images, achieving state-of-the-art performance. The constructed high-quality dataset and source code will be made publicly available.
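The abstract names an adaptive direction-sensitive gradient-based reconstruction loss but does not give its formulation. As a rough illustration only, the sketch below shows one plausible shape for such a loss: an L1 photometric term plus L1 penalties on finite-difference image gradients taken along several directions, with per-direction weights that could be adapted during training (e.g., down-weighting the dominant rain-streak orientation). The function names, the chosen directions, and the weighting scheme are all assumptions, not the paper's actual definition.

```python
import numpy as np

def directional_gradient(img, dy, dx):
    """Finite-difference gradient of a 2D image along direction (dy, dx)."""
    return np.roll(img, (-dy, -dx), axis=(0, 1)) - img

def direction_sensitive_loss(pred, target,
                             directions=((0, 1), (1, 0), (1, 1), (1, -1)),
                             weights=None):
    """Hypothetical direction-sensitive gradient reconstruction loss
    (illustrative only; not the paper's exact formulation).

    Combines an L1 photometric term with weighted L1 penalties on
    image gradients along several directions; in an adaptive variant,
    the weights would be updated to suppress the rain-streak direction.
    """
    if weights is None:
        weights = [1.0 / len(directions)] * len(directions)
    loss = np.abs(pred - target).mean()  # photometric term
    for w, (dy, dx) in zip(weights, directions):
        gp = directional_gradient(pred, dy, dx)
        gt = directional_gradient(target, dy, dx)
        loss += w * np.abs(gp - gt).mean()  # gradient term along (dy, dx)
    return loss
```

Such a multi-directional gradient penalty gives the optimizer a way to treat oriented high-frequency structure (rain streaks) differently from isotropic scene detail, which matches the stated goal of steering gradients toward the right module.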
URL
https://arxiv.org/abs/2404.11401