Recurrent networks can recycle neural resources to flexibly trade speed for accuracy in visual recognition
[S.l. : s.n.]
In: 2019 Conference on Cognitive Computational Neuroscience, pp. 739-742
Conference on Cognitive Computational Neuroscience (Berlin, Germany, 13-16 September 2019)
Article in monograph or in proceedings
SW OZ DCC AI
Subject: Cognitive artificial intelligence
Deep feedforward neural network models of vision dominate in both computational neuroscience and engineering. However, the primate visual system contains abundant recurrent connections. Recurrent signal flow enables recycling of limited computational resources over time, and so might boost the performance of a physically finite brain. In particular, recurrence could improve performance in vision tasks. Here we find that recurrent convolutional networks outperform feedforward convolutional networks matched in their number of parameters in large-scale visual recognition tasks. Moreover, recurrent networks can trade off accuracy for speed, balancing the cost of error against the cost of a delayed response (and the cost of greater energy consumption). We terminate recurrent computation once the output probability distribution has concentrated beyond a predefined entropy threshold. Trained by backpropagation through time, recurrent convolutional networks resemble the primate visual system in terms of their speed-accuracy trade-off behaviour. These results suggest that recurrent models are preferable to feedforward models of vision, both in terms of their performance at vision tasks and their ability to explain biological vision.
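The entropy-threshold stopping rule described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: `step_fn`, `run_until_confident`, and the toy recurrent update are hypothetical names introduced here, and the choice of a fixed entropy threshold in nats is an assumption.

```python
import numpy as np

def entropy(p):
    """Shannon entropy (in nats) of a probability distribution."""
    p = np.clip(p, 1e-12, 1.0)
    return -np.sum(p * np.log(p))

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def run_until_confident(step_fn, h0, threshold, max_steps=50):
    """Recycle a recurrent step until the output distribution is confident.

    step_fn : maps a hidden state to (new hidden state, output logits);
              in the paper this role is played by a recurrent conv net.
    Computation halts once output entropy drops below `threshold`,
    trading a faster response (fewer steps) against possible errors.
    Returns (class probabilities, number of recurrent steps used).
    """
    h = h0
    for t in range(1, max_steps + 1):
        h, logits = step_fn(h)
        p = softmax(logits)
        if entropy(p) < threshold:
            return p, t  # confident early: stop recycling
    return p, max_steps   # budget exhausted: answer anyway
```

Raising the threshold makes the network answer earlier (faster, potentially less accurate); lowering it spends more recurrent steps before committing, which is the speed-accuracy trade-off the abstract describes.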