Abstract

In the course of our daily lives, we rely on our ability to quickly recognize visual objects despite variations in viewing angle or lighting conditions (1). Most of the time, this process occurs so naturally that it is easy to underestimate the computational challenge it poses. In fact, as much as one third of our cortex is involved in visual object recognition (2). Thus far, we have not been able to create artificial object recognition systems that match the performance of the human visual system, especially once energetic constraints are taken into account. However, understanding the principles of visual object recognition has major implications, not only because of potential wide-ranging practical applications but also because these principles are likely to hold clues to how sensory systems work in general. After all, there is a growing body of experimental and computational evidence that similar principles (3) might be at work during visual, auditory, or olfactory object recognition tasks. In PNAS, Yamins et al. present an important advance in this research direction by demonstrating how to optimize artificial networks to match the discrimination ability of primates when it comes to categorizing objects (4). Importantly, the optimized networks share many features with the corresponding networks in the brain. Thus, although the structure of neural networks in the brain could have been largely determined by the idiosyncrasies of their prior evolutionary trajectory, it seems instead to reflect a unique optimal solution. This in turn offers more support for theoretical and computational pursuits to find optimal circuit organization in the brain (5).
