Paper Reading AI Learner

Towards Robust Event-guided Low-Light Image Enhancement: A Large-Scale Real-World Event-Image Dataset and Novel Approach

2024-04-01 00:18:17
Guoqiang Liang, Kanghao Chen, Hangyu Li, Yunfan Lu, Lin Wang

Abstract

Event cameras have recently received much attention for low-light image enhancement (LIE) thanks to their distinct advantages, such as high dynamic range. However, current research is prohibitively restricted by the lack of large-scale, real-world, and spatio-temporally aligned event-image datasets. To this end, we propose a real-world (indoor and outdoor) dataset comprising over 30K pairs of images and events under both low and normal illumination conditions. To achieve this, we utilize a robotic arm that traces a consistent non-linear trajectory to curate the dataset with a spatial alignment precision under 0.03 mm. We then introduce a matching alignment strategy that brings the temporal error below 0.01 s for 90% of our dataset. Based on the dataset, we propose a novel event-guided LIE approach, called EvLight, towards robust performance in real-world low-light scenes. Specifically, we first design a multi-scale holistic fusion branch to extract holistic structural and textural information from both events and images. To ensure robustness against variations in regional illumination and noise, we then introduce a Signal-to-Noise-Ratio (SNR)-guided regional feature selection, which selectively fuses image features from regions with high SNR and enhances regions with low SNR by extracting regional structural information from events. Extensive experiments on our dataset and the synthetic SDSD dataset demonstrate that our EvLight significantly surpasses frame-based methods. Code and datasets are available at this https URL.
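The SNR-guided regional feature selection described above can be pictured with a short sketch. The PyTorch code below is a minimal illustration under stated assumptions, not the authors' released implementation: the blur-based SNR estimate, the names snr_map and SNRGuidedFusion, and the soft-mask fusion are all choices made here for exposition.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def snr_map(low_img: torch.Tensor, ksize: int = 5) -> torch.Tensor:
    """Estimate a per-pixel SNR map from a low-light frame.

    A blurred copy of the image approximates the signal, and the residual
    |image - blur| approximates the noise. This is a common heuristic in
    SNR-aware enhancement, assumed here rather than taken from the paper.
    """
    gray = low_img.mean(dim=1, keepdim=True)                   # B x 1 x H x W
    signal = F.avg_pool2d(gray, ksize, stride=1, padding=ksize // 2)
    noise = (gray - signal).abs()
    return signal / (noise + 1e-6)

class SNRGuidedFusion(nn.Module):
    """Sketch of SNR-guided regional feature selection: trust image
    features where SNR is high, and fall back on event-derived structural
    features where SNR is low."""

    def __init__(self, channels: int = 64):
        super().__init__()
        self.img_proj = nn.Conv2d(channels, channels, 3, padding=1)
        self.ev_proj = nn.Conv2d(channels, channels, 3, padding=1)

    def forward(self, img_feat, ev_feat, snr):
        # Normalize the SNR map to [0, 1] per sample so it acts as a
        # soft regional selection mask.
        s = snr / (snr.amax(dim=(2, 3), keepdim=True) + 1e-6)
        s = F.interpolate(s, size=img_feat.shape[-2:], mode="bilinear",
                          align_corners=False)
        # High-SNR regions keep image features; low-SNR regions are
        # enhanced with structure extracted from events.
        return s * self.img_proj(img_feat) + (1 - s) * self.ev_proj(ev_feat)

# Example usage (shapes are illustrative):
# img_feat, ev_feat: B x 64 x H x W maps from the image and event encoders
# fused = SNRGuidedFusion(64)(img_feat, ev_feat, snr_map(low_light_image))
```

The soft mask s lets image features dominate wherever the frame is reliable, while event features supply structure in regions where the frame is buried in noise; a hard threshold on the SNR map would be the discrete variant of the same idea.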


URL

https://arxiv.org/abs/2404.00834

PDF

https://arxiv.org/pdf/2404.00834.pdf

