Abstract
Spike-based communication between biological neurons is sparse and unreliable. This enables the brain to process visual information from the eyes efficiently. Taking inspiration from biology, artificial spiking neural networks coupled with silicon retinas attempt to model these computations. Recent findings in machine learning allowed the derivation of a family of powerful synaptic plasticity rules approximating backpropagation for spiking networks. Are these rules capable of processing real-world visual sensory data? In this paper, we evaluate the performance of Event-Driven Random Backpropagation (eRBP) at learning representations from event streams provided by a Dynamic Vision Sensor (DVS). First, we show that eRBP matches state-of-the-art performance on DvsGesture with the addition of a simple covert attention mechanism. By remapping visual receptive fields relative to the center of motion, this attention mechanism provides translation invariance at low computational cost compared to convolutions. Second, we successfully integrate eRBP into a real robotic setup, where a robotic arm grasps objects based on detected visual affordances. In this setup, visual information is actively sensed by a DVS mounted on a robotic head performing microsaccadic eye movements. We show that our method quickly classifies affordances within 100 ms after microsaccade onset, comparable to human performance reported in behavioral studies. Our results suggest that advances in neuromorphic technology and plasticity rules enable the development of autonomous robots operating at high speed and on a low energy budget.
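The covert attention mechanism described above can be sketched as follows. This is a minimal illustrative interpretation, not the authors' implementation: it assumes DVS events arrive as (x, y) coordinates in time order, tracks the center of event activity with an exponential moving average (the smoothing factor `alpha` and all function names are assumptions), and shifts each event into a frame centered on that attended location.

```python
import numpy as np

def remap_events(events, width=128, height=128, alpha=0.1):
    """Hypothetical sketch of covert-attention remapping for DVS events.

    events: (N, 2) integer array of (x, y) event coordinates, in time order.
    Returns the events shifted so that receptive fields are expressed
    relative to the exponentially smoothed center of motion, which yields
    translation invariance without convolutions.
    """
    # Start the attended location at the sensor's geometric center.
    centroid = np.array([width / 2.0, height / 2.0])
    remapped = np.empty_like(events)
    for i, (x, y) in enumerate(events):
        # Exponential moving average tracks the center of event activity.
        centroid = (1 - alpha) * centroid + alpha * np.array([x, y])
        # Shift the event into a frame centered on the attended location.
        remapped[i] = [int(round(x - centroid[0] + width / 2)),
                       int(round(y - centroid[1] + height / 2))]
    return remapped
```

With this scheme, a cluster of events drifting across the sensor keeps roughly the same remapped coordinates, so downstream spiking layers see a stabilized input at a cost of one running average per event.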
URL
https://arxiv.org/abs/1904.04805