Abstract

The brain successfully performs visual object recognition using hierarchical networks that are much shallower than the artificial deep neural networks (DNNs) that perform similar tasks. Here, we show that long-range horizontal connections (LRCs), often observed in the visual cortex of mammalian species, enable such cost-efficient visual object recognition in shallow neural networks. Using simulations of a model hierarchical network with convergent feedforward connections and LRCs, we found that adding LRCs to a shallow feedforward network significantly enhances its image-classification performance, to a degree comparable to that of much deeper networks. We also found that a combination of sparse LRCs and dense local connections dramatically increases performance per wiring cost. Through network pruning with gradient-based optimization, we further confirmed that LRCs can emerge spontaneously when the total connection length is minimized while performance is maintained. Ablation of the emerged LRCs led to a significant reduction in classification performance, implying that these LRCs are crucial for image classification. Taken together, our findings suggest a brain-inspired strategy for constructing a cost-efficient network architecture that implements parsimonious object recognition under physical constraints such as shallow hierarchical depth.
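To make the architecture concrete, the following is a minimal PyTorch sketch, not the authors' implementation: a shallow feedforward network whose layer also receives sparse long-range lateral input, here approximated by a heavily dilated depthwise convolution, trained against a task loss plus a distance-weighted wiring penalty. The `LRCBlock`, `ShallowLRCNet`, and `wiring_cost` names, the dilation-as-distance proxy, and the penalty coefficients are all assumptions introduced for illustration.

```python
# Sketch (assumed, not the paper's code): shallow net with long-range
# horizontal connections (LRCs) and a hypothetical wiring-length penalty.
import torch
import torch.nn as nn

class LRCBlock(nn.Module):
    """Dense local connections plus sparse long-range lateral input."""
    def __init__(self, channels, dilation=8):
        super().__init__()
        # Dense local connections: small receptive field within the layer.
        self.local = nn.Conv2d(channels, channels, kernel_size=3, padding=1)
        # Sparse LRCs: large dilation so each unit samples a few distant
        # units in the same layer (depthwise, so connections stay sparse).
        self.lrc = nn.Conv2d(channels, channels, kernel_size=3,
                             padding=dilation, dilation=dilation,
                             groups=channels, bias=False)
        self.act = nn.ReLU()

    def forward(self, x):
        return self.act(self.local(x) + self.lrc(x))

class ShallowLRCNet(nn.Module):
    """A deliberately shallow hierarchy: stem -> one LRC block -> classifier."""
    def __init__(self, num_classes=10):
        super().__init__()
        self.stem = nn.Sequential(nn.Conv2d(3, 32, 3, padding=1), nn.ReLU())
        self.block = LRCBlock(32, dilation=8)
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.head = nn.Linear(32, num_classes)

    def forward(self, x):
        x = self.block(self.stem(x))
        return self.head(self.pool(x).flatten(1))

def wiring_cost(model):
    """Hypothetical total-connection-length proxy: each lateral weight is
    charged in proportion to the spatial distance (dilation) it spans, so
    minimizing it favors keeping only a few long-range connections."""
    cost = torch.tensor(0.0)
    for m in model.modules():
        if isinstance(m, LRCBlock):
            cost = cost + m.lrc.dilation[0] * m.lrc.weight.abs().sum()
            cost = cost + 1.0 * m.local.weight.abs().sum()  # short wires are cheap
    return cost

model = ShallowLRCNet()
x = torch.randn(2, 3, 32, 32)
logits = model(x)
loss = nn.functional.cross_entropy(logits, torch.tensor([0, 1]))
loss = loss + 1e-4 * wiring_cost(model)  # task loss + wiring penalty
loss.backward()
```

Under this reading, gradient-based pruning of the penalized loss would drive most lateral weights toward zero while retaining the few long-range ones that support classification, consistent with the sparse-LRC-plus-dense-local pattern described above.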
