
Spyker: High-performance Library for Spiking Deep Neural Networks

2023-01-31 14:25:03
Shahriar Rezghi Shirsavar, Mohammad-Reza A. Dehaqani

Abstract

Spiking neural networks (SNNs) have recently attracted attention due to their promising capabilities. SNNs simulate the brain with higher biological plausibility than previous generations of neural networks. Learning from fewer samples and consuming less power are among the key features of these networks. However, the theoretical advantages of SNNs have not been realized in practice because of the slowness of simulation tools and the impracticality of the proposed network structures. In this work, we implement a high-performance library named Spyker from scratch in C++/CUDA that outperforms its predecessor. Several SNNs with different learning rules (spike-timing-dependent plasticity and reinforcement learning) are implemented with Spyker and achieve significantly better runtimes, demonstrating the practicality of the library for simulating large-scale networks. To our knowledge, no such tools have been developed to simulate large-scale spiking neural networks with high performance using a modular structure. Furthermore, stimulus representations extracted from Spyker are compared to recorded electrophysiology data to demonstrate the applicability of SNNs in describing the underlying neural mechanisms of brain functions. The aim of this library is to take a significant step toward uncovering the true potential of brain computations using SNNs.
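The abstract names spike-timing-dependent plasticity (STDP) as one of the learning rules implemented with Spyker. The sketch below illustrates a simplified pair-based, multiplicative STDP weight update of the kind commonly used in such networks; it does not use Spyker's actual API, and the function, parameter names, and values are illustrative assumptions only.

    # Minimal STDP sketch (illustrative only, NOT Spyker's API).
    # Pair-based multiplicative rule: potentiate when the presynaptic spike
    # precedes the postsynaptic spike, depress otherwise, with soft bounds.
    import numpy as np

    def stdp_update(w, pre_spike_time, post_spike_time,
                    a_plus=0.01, a_minus=0.01, w_min=0.0, w_max=1.0):
        if pre_spike_time <= post_spike_time:
            w = w + a_plus * (w - w_min) * (w_max - w)   # LTP, soft-bounded
        else:
            w = w - a_minus * (w - w_min) * (w_max - w)  # LTD, soft-bounded
        return float(np.clip(w, w_min, w_max))

    # Example: presynaptic neuron fires before the postsynaptic one -> weight grows
    print(stdp_update(0.5, pre_spike_time=3, post_spike_time=7))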

Abstract (translated)

Spiking neural networks (SNNs) have recently come to light because of their potential. Compared to previous generations of neural networks, SNNs simulate the brain more faithfully. Learning from fewer samples and consuming less energy are among the key features of these networks. However, the theoretical advantages of SNNs have not materialized in practice because simulation tools are too slow and the proposed network structures are impractical. In this work, we implement a high-performance library named Spyker from scratch in C++/CUDA that outperforms its predecessor. Using Spyker, we implement several SNNs with different learning rules (spike-timing-dependent plasticity and reinforcement learning) that achieve significantly better runtimes, demonstrating the library's practicality for simulating large-scale networks. To our knowledge, no library has been developed to simulate large-scale spiking neural networks with high performance in a modular fashion. Furthermore, we compare the stimulus representations extracted from Spyker to recorded electrophysiology data to demonstrate the applicability of SNNs in describing the underlying neural mechanisms of brain functions. The aim of this library is to take a significant step toward uncovering the true potential of brain computations using SNNs.

URL

https://arxiv.org/abs/2301.13659

PDF

https://arxiv.org/pdf/2301.13659.pdf

