Abstract
Biological nervous systems consist of networks of diverse, sophisticated information processors in the form of neurons of different classes. In most artificial neural networks (ANNs), neural computation is abstracted to an activation function that is usually shared between all neurons within a layer or even the whole network; training of ANNs focuses on synaptic optimization. In this paper, we propose optimizing neuro-centric parameters to attain a set of diverse neurons that can perform complex computations. Demonstrating the promise of the approach, we show that evolving neural parameters alone allows agents to solve various reinforcement learning tasks without optimizing any synaptic weights. While we do not aim for an accurate biological model, parameterizing neurons to a greater degree than is common practice allows us to ask questions about the computational abilities afforded by neural diversity in random neural networks. The presented results open up interesting future research directions, such as combining evolved neural diversity with activity-dependent plasticity.
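To make the core idea concrete, the following is a minimal sketch of the setup the abstract describes: synaptic weights are drawn randomly and never trained, each neuron gets its own activation parameters (here an illustrative per-neuron gain and bias on a tanh, not the paper's exact parameterization), and a simple (1+1) evolution strategy optimizes only those neural parameters on a toy XOR regression task. All names and the fitness function are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Fixed random synaptic weights: these are never optimized.
W1 = rng.normal(0.0, 1.0, (8, 2))   # input -> hidden
W2 = rng.normal(0.0, 1.0, (1, 8))   # hidden -> output

def forward(x, theta):
    # theta holds per-neuron parameters: a gain and a bias for each
    # of the 8 hidden neurons (an assumed, simplified parameterization).
    gain, bias = theta[:8], theta[8:]
    h = np.tanh(gain * (W1 @ x) + bias)
    return (W2 @ h)[0]

def fitness(theta):
    # Toy task standing in for an RL objective: fit XOR on 2 inputs.
    xs = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
    ys = np.array([0.0, 1.0, 1.0, 0.0])
    return -sum((forward(x, theta) - y) ** 2 for x, y in zip(xs, ys))

# (1+1) evolution strategy over the 16 neuron parameters only.
theta = rng.normal(0.0, 1.0, 16)
init_fitness = fitness(theta)
best = init_fitness
for _ in range(300):
    cand = theta + rng.normal(0.0, 0.1, 16)
    f = fitness(cand)
    if f >= best:          # keep the candidate if it is no worse
        theta, best = cand, f
```

Because only improvements are accepted, fitness is monotone non-decreasing across generations; the network's weights stay at their random initialization throughout, which mirrors the paper's claim that neural diversity alone can carry the computation.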
URL
https://arxiv.org/abs/2305.15945