Paper Reading AI Learner

Modeling User Preferences via Brain-Computer Interfacing

2024-05-15 20:41:46
Luis A. Leiva, Javier Traver, Alexandra Kawala-Sterniuk, Tuukka Ruotsalo

Abstract

Present Brain-Computer Interfacing (BCI) technology allows inference and detection of cognitive and affective states, but fairly little has been done to study scenarios in which such information can facilitate new applications that rely on modeling human cognition. One state that can be quantified from various physiological signals is attention. Estimates of human attention can be used to reveal preferences and novel dimensions of user experience. Previous approaches have tackled these incredibly challenging tasks using a variety of behavioral signals, from dwell-time to click-through data, and computational models of visual correspondence to these behavioral signals. However, behavioral signals are only rough estimations of the real underlying attention and affective preferences of the users. Indeed, users may attend to some content simply because it is salient, but not because it is really interesting, or simply because it is outrageous. With this paper, we put forward a research agenda and example work using BCI to infer users' preferences, their attentional correlates towards visual content, and their associations with affective experience. Subsequently, we link these to relevant applications, such as information retrieval, personalized steering of generative models, and crowdsourcing population estimates of affective experiences.
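To make the abstract's idea of attention being "quantified from various physiological signals" concrete, below is a minimal, self-contained Python sketch. It is not the paper's method: it merely illustrates one common approach, using alpha-band EEG power (whose suppression is often associated with visual attention) as a crude attention proxy and fitting a toy classifier. The sampling rate, band limits, feature choice, and classifier are all illustrative assumptions, and the data is synthetic.

```python
# Illustrative sketch only (not the paper's pipeline): quantify "attention"
# from an EEG-like signal via alpha-band power and fit a toy classifier.
# All numbers and data below are assumptions / synthetic stand-ins.
import numpy as np
from scipy.signal import welch
from sklearn.linear_model import LogisticRegression

FS = 256          # sampling rate in Hz (assumed)
ALPHA = (8, 12)   # alpha band in Hz; lower alpha power often tracks attention

def alpha_power(epoch):
    """Mean alpha-band power of one EEG epoch (channels x samples)."""
    freqs, psd = welch(epoch, fs=FS, nperseg=FS)  # PSD per channel
    band = (freqs >= ALPHA[0]) & (freqs <= ALPHA[1])
    return psd[:, band].mean()

# Synthetic stand-in for epoched EEG: 100 epochs, 8 channels, 2 s each,
# with binary "attended vs. not attended" labels.
rng = np.random.default_rng(0)
epochs = rng.standard_normal((100, 8, 2 * FS))
labels = rng.integers(0, 2, size=100)

# One scalar feature per epoch; a real pipeline would use richer features
# (multiple bands, channels, time dynamics) and a held-out test set.
features = np.array([[alpha_power(e)] for e in epochs])

clf = LogisticRegression().fit(features, labels)
print("Training accuracy (synthetic data):", clf.score(features, labels))
```

In a real BCI study the features would come from recorded, preprocessed EEG epochs time-locked to the viewed content, and the resulting per-item attention estimates could then be aggregated into the preference signals the paper discusses.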

URL

https://arxiv.org/abs/2405.09691

PDF

https://arxiv.org/pdf/2405.09691.pdf
