
The Sensorium competition on predicting large-scale mouse primary visual cortex activity

2022-06-17 10:09:57
Konstantin F. Willeke (1 and 2 and 3), Paul G. Fahey (4 and 5), Mohammad Bashiri (1 and 2 and 3), Laura Pede (3), Max F. Burg (1 and 2 and 3 and 6), Christoph Blessing (3), Santiago A. Cadena (1 and 3 and 6), Zhiwei Ding (4 and 5), Konstantin-Klemens Lurz (1 and 2 and 3), Kayla Ponder (4 and 5), Taliah Muhammad (4 and 5), Saumil S. Patel (4 and 5), Alexander S. Ecker (3 and 7), Andreas S. Tolias (4 and 5 and 8), Fabian H. Sinz (2 and 3 and 4 and 5) ((1) International Max Planck Research School for Intelligent Systems, University of Tuebingen, Germany, (2) Institute for Bioinformatics and Medical Informatics, University of Tuebingen, Germany (3) Institute of Computer Science and Campus Institute Data Science, University of Goettingen, Germany, (4) Department of Neuroscience, Baylor College of Medicine, Houston, USA, (5) Center for Neuroscience and Artificial Intelligence, Baylor College of Medicine, Houston, USA, (6) Institute for Theoretical Physics, University of Tuebingen, Germany, (7) Max Planck Institute for Dynamics and Self-Organization, Goettingen, Germany, (8) Electrical and Computer Engineering, Rice University, Houston, USA)

Abstract

The neural underpinning of the biological visual system is challenging to study experimentally, in particular as the neuronal activity becomes increasingly nonlinear with respect to visual input. Artificial neural networks (ANNs) can serve a variety of goals for improving our understanding of this complex system, not only serving as predictive digital twins of sensory cortex for novel hypothesis generation in silico, but also incorporating bio-inspired architectural motifs to progressively bridge the gap between biological and machine vision. The mouse has recently emerged as a popular model system to study visual information processing, but no standardized large-scale benchmark to identify state-of-the-art models of the mouse visual system has been established. To fill this gap, we propose the Sensorium benchmark competition. We collected a large-scale dataset from mouse primary visual cortex containing the responses of more than 28,000 neurons across seven mice stimulated with thousands of natural images, together with simultaneous behavioral measurements that include running speed, pupil dilation, and eye movements. The benchmark challenge will rank models based on predictive performance for neuronal responses on a held-out test set, and includes two tracks for model input limited to either stimulus only (Sensorium) or stimulus plus behavior (Sensorium+). We provide a starting kit to lower the barrier for entry, including tutorials, pre-trained baseline models, and APIs with one-line commands for data loading and submission. We would like to see this as a starting point for regular challenges and data releases, and as a standard tool for measuring progress in large-scale neural system identification models of the mouse visual system and beyond.
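The abstract states that models are ranked by predictive performance for neuronal responses on a held-out test set. As a minimal sketch of what such a ranking metric could look like, the snippet below computes a per-neuron Pearson correlation between predicted and observed responses and averages it across neurons; the exact metric, data format, and submission API are defined by the competition's starting kit, so the function and array names here are illustrative assumptions, not the official evaluation code.

```python
import numpy as np

def per_neuron_correlation(predicted, observed):
    """Pearson correlation between predicted and observed responses,
    computed independently for each neuron.

    predicted, observed: arrays of shape (n_trials, n_neurons).
    Returns an array of shape (n_neurons,).
    """
    pred_c = predicted - predicted.mean(axis=0, keepdims=True)
    obs_c = observed - observed.mean(axis=0, keepdims=True)
    num = (pred_c * obs_c).sum(axis=0)
    denom = np.sqrt((pred_c ** 2).sum(axis=0) * (obs_c ** 2).sum(axis=0))
    return num / np.maximum(denom, 1e-12)  # guard against constant neurons

# Illustrative example with synthetic data (trials x neurons), not competition data.
rng = np.random.default_rng(0)
observed = rng.poisson(2.0, size=(500, 100)).astype(float)
predicted = observed + rng.normal(0.0, 1.0, size=observed.shape)  # a "noisy model"
score = per_neuron_correlation(predicted, observed).mean()
print(f"mean per-neuron correlation: {score:.3f}")
```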


URL

https://arxiv.org/abs/2206.08666

PDF

https://arxiv.org/pdf/2206.08666.pdf

