Paper Reading AI Learner

Attention! Dynamic Epistemic Logic Models of (In)attentive Agents

2023-03-23 17:55:32
Gaia Belardinelli, Thomas Bolander

Abstract

Attention is the crucial cognitive ability that limits and selects what information we observe. Previous work by Bolander et al. (2016) proposes a model of attention based on dynamic epistemic logic (DEL) where agents are either fully attentive or not attentive at all. While introducing the realistic feature that inattentive agents believe nothing happens, the model does not represent the most essential aspect of attention: its selectivity. Here, we propose a generalization that allows for paying attention to subsets of atomic formulas. We introduce the corresponding logic for propositional attention, and show its axiomatization to be sound and complete. We then extend the framework to account for inattentive agents that, instead of assuming nothing happens, may default to a specific truth value of what they failed to attend to (a sort of prior concerning the unattended atoms). This feature allows for a more cognitively plausible representation of the inattentional blindness phenomenon, where agents end up with false beliefs due to their failure to attend to conspicuous but unexpected events. Both versions of the model define attention-based learning through appropriate DEL event models based on a few clear edge principles. While the size of such event models grows exponentially with both the number of agents and the number of atoms, we introduce a new logical language for describing event models syntactically and show that, using this language, our event models can be represented linearly in the number of agents and atoms. Furthermore, representing our event models using this language is achieved by a straightforward formalisation of the aforementioned edge principles.
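The claim about representation size can be illustrated with a simple counting argument: with n atomic propositions, each agent has 2^n possible attention sets, so explicitly enumerating joint attention possibilities for m agents yields (2^n)^m combinations, whereas stating, for each agent and each atom, whether that agent attends to it takes only m·n statements. The following Python sketch makes this counting concrete; the names used here (attention_sets, attends) are illustrative assumptions only and do not reproduce the paper's actual event-model construction or syntax.

```python
# Illustrative sketch (not the paper's construction): why an explicit event model
# over propositional attention blows up exponentially, while a syntactic,
# per-agent/per-atom description stays linear in size.

from itertools import chain, combinations

def attention_sets(atoms):
    """All subsets of atoms an agent could be attending to: 2^|atoms| of them."""
    return list(chain.from_iterable(
        combinations(atoms, r) for r in range(len(atoms) + 1)))

atoms = ["p", "q", "r"]   # atomic propositions
agents = ["a", "b"]       # agents

# Explicit enumeration: one candidate "attention profile" per choice of an
# attention set for every agent -> (2^|atoms|)^|agents| combinations.
per_agent = attention_sets(atoms)
explicit_profiles = len(per_agent) ** len(agents)

# Syntactic description (in the spirit of the paper's linear-size language,
# exact syntax assumed here): one attention statement per agent-atom pair.
syntactic_statements = [f"attends({i}, {p})" for i in agents for p in atoms]

print(f"explicit profiles: {explicit_profiles}")              # 64 = (2^3)^2
print(f"syntactic statements: {len(syntactic_statements)}")   # 6 = 2 * 3
```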

URL

https://arxiv.org/abs/2303.13494

PDF

https://arxiv.org/pdf/2303.13494.pdf

