Paper Reading AI Learner

ROSE: A Neurocomputational Architecture for Syntax

2023-03-15 18:44:37
Elliot Murphy

Abstract

A comprehensive model of natural language processing in the brain must accommodate four components: representations, operations, structures and encoding. It further requires a principled account of how these components mechanistically, and causally, relate to one another. While previous models have isolated regions of interest for structure-building and lexical access, many gaps remain with respect to bridging distinct scales of neural complexity. By expanding existing accounts of how neural oscillations can index various linguistic processes, this article proposes a neurocomputational architecture for syntax, termed the ROSE model (Representation, Operation, Structure, Encoding). Under ROSE, the basic data structures of syntax are atomic features, types of mental representations (R), and are coded at the single-unit and ensemble level. Elementary computations (O) that transform these units into manipulable objects accessible to subsequent structure-building levels are coded via high-frequency gamma activity. Low-frequency synchronization and cross-frequency coupling code for recursive categorial inferences (S). Distinct forms of low-frequency coupling and phase-amplitude coupling (delta-theta coupling via pSTS-IFG; theta-gamma coupling via IFG to conceptual hubs) then encode these structures onto distinct workspaces (E). Causally connecting R to O is spike-phase/LFP coupling; connecting O to S is phase-amplitude coupling; connecting S to E is a system of frontotemporal traveling oscillations; connecting E to lower levels is low-frequency phase resetting of spike-LFP coupling. ROSE relies on neurophysiologically plausible mechanisms, is supported at all four levels by a range of recent empirical research, and provides an anatomically precise and falsifiable grounding for the basic property of natural language syntax: hierarchical, recursive structure-building.

URL

https://arxiv.org/abs/2303.08877

PDF

https://arxiv.org/pdf/2303.08877.pdf

