Abstract
Recurrent Neural Networks (RNNs) are frequently used to model aspects of brain function and structure. In this work, we trained small fully-connected RNNs to perform temporal and flow-control tasks with time-varying stimuli. Our results show that different RNNs can solve the same task by converging to different underlying dynamics, and that performance degrades gracefully as network size is decreased, interval duration is increased, or connectivity is damaged. These results are useful for quantifying different aspects of such models, which are normally used as black boxes and need to be understood before modeling the biological responses of cerebral cortex areas.
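As a minimal illustrative sketch (not the authors' implementation), a small fully-connected RNN driven by a time-varying stimulus can be written as a discrete-time update h(t+1) = tanh(W_rec h(t) + W_in x(t)) with a linear readout; the class name, sizes, and initialization scales below are assumptions for illustration only:

```python
import numpy as np

rng = np.random.default_rng(0)

class TinyRNN:
    """Hypothetical small fully-connected RNN with a linear readout."""

    def __init__(self, n_in, n_hidden, n_out):
        # Fully-connected input, recurrent, and output weights
        # (1/sqrt(fan-in) scaling is a common, assumed choice).
        self.W_in = rng.normal(0.0, 1.0 / np.sqrt(n_in), (n_hidden, n_in))
        self.W_rec = rng.normal(0.0, 1.0 / np.sqrt(n_hidden), (n_hidden, n_hidden))
        self.W_out = rng.normal(0.0, 1.0 / np.sqrt(n_hidden), (n_out, n_hidden))

    def forward(self, stimuli):
        # stimuli: array of shape (T, n_in), a time-varying input sequence.
        # Returns the readout at every time step, shape (T, n_out).
        h = np.zeros(self.W_rec.shape[0])
        outputs = []
        for x in stimuli:
            h = np.tanh(self.W_rec @ h + self.W_in @ x)
            outputs.append(self.W_out @ h)
        return np.array(outputs)

# Example: drive a 20-unit network with a random 50-step stimulus.
net = TinyRNN(n_in=2, n_hidden=20, n_out=1)
readout = net.forward(rng.normal(size=(50, 2)))
```

Shrinking `n_hidden` or lengthening the stimulus in such a model is the kind of manipulation under which the abstract reports graceful performance degradation.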
URL
https://arxiv.org/abs/1906.01094