Paper Reading AI Learner

Eye-tracked Virtual Reality: A Comprehensive Survey on Methods and Privacy Challenges

2023-05-23 14:02:38
Efe Bozkir, Süleyman Özdel, Mengdi Wang, Brendan David-John, Hong Gao, Kevin Butler, Eakta Jain, Enkelejda Kasneci

Abstract

The latest developments in computer hardware, sensor technologies, and artificial intelligence can make virtual reality (VR) and virtual spaces an important part of everyday human life. Eye tracking offers not only a hands-free mode of interaction but also the possibility of a deeper understanding of human visual attention and cognitive processes in VR. Despite these possibilities, eye-tracking data also reveal privacy-sensitive attributes of users when combined with information about the presented stimulus. To address both these opportunities and the potential privacy issues, this survey first covers the major works in eye tracking, VR, and privacy published between 2012 and 2022. For eye tracking in VR, we cover the complete eye-tracking methodology pipeline, from pupil detection and gaze estimation to offline use and analyses; for privacy and security, we focus on eye-based authentication as well as computational methods for preserving the privacy of individuals and their eye-tracking data in VR. Taking all of this into consideration, we then draw three main directions for the research community, focusing mainly on privacy challenges. In summary, this survey provides an extensive literature review of what is possible with eye tracking in VR and of the privacy implications of those possibilities.


URL

https://arxiv.org/abs/2305.14080

PDF

https://arxiv.org/pdf/2305.14080.pdf

