Abstract

Deep convolutional neural networks (CNNs) have demonstrated impressive performance on many visual tasks. Recently, they have become useful models for the visual system in neuroscience. However, it is still not clear what is learned by CNNs in terms of neuronal circuits. When a deep CNN with many layers is used to model the visual system, it is not easy to compare the structural components of CNNs with possible neuroscience underpinnings, due to the highly complex circuits from the retina to the higher visual cortex. Here, we address this issue by focusing on single retinal ganglion cells, using biophysical models and recording data from animals. By training CNNs with white noise images to predict neuronal responses, we found that fine structures of the retinal receptive field can be revealed. Specifically, the learned convolutional filters resemble biological components of the retinal circuit. This suggests that a CNN trained on a single retinal cell reveals a minimal neural network implemented in this cell. Furthermore, when CNNs trained on different cells are transferred between cells, transfer learning performance varies widely, which indicates that the learned CNNs are cell specific. Moreover, when CNNs are transferred between different types of input images, here white noise versus natural images, transfer learning performs well, which implies that CNNs indeed capture the full computational ability of a single retinal cell across different inputs. Taken together, these results suggest that CNNs can be used to reveal structural components of neuronal circuits, and provide a powerful model for neural system identification.
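The encoding model described above can be illustrated with a minimal sketch: a single convolutional filter is applied to a white-noise stimulus frame, the resulting feature map is rectified (the subunit nonlinearity), and the rectified activations are pooled to predict a firing rate. This is a hypothetical one-layer illustration, not the paper's actual architecture; all names (`rgc_response`, the filter and weight sizes) are assumptions for the sketch.

```python
import numpy as np

def rgc_response(stimulus, filt, out_weights):
    """One-layer convolutional subunit model of an RGC:
    convolve, rectify (ReLU), then pool with output weights."""
    h, w = filt.shape
    H, W = stimulus.shape
    # valid 2-D cross-correlation of the stimulus with one filter
    maps = np.array([[np.sum(stimulus[i:i + h, j:j + w] * filt)
                      for j in range(W - w + 1)]
                     for i in range(H - h + 1)])
    subunit_out = np.maximum(maps, 0.0)   # rectifying nonlinearity
    # weighted pooling of subunit outputs gives a scalar firing rate
    return float(np.sum(subunit_out * out_weights))

rng = np.random.default_rng(0)
stim = rng.normal(size=(16, 16))            # one white-noise frame
filt = rng.normal(size=(5, 5))              # a learned convolutional filter
w_out = np.abs(rng.normal(size=(12, 12)))   # nonnegative pooling weights
rate = rgc_response(stim, filt, w_out)      # predicted firing rate, >= 0
```

In a full model the filter and pooling weights would be fit by gradient descent to minimize the error between predicted and recorded responses; here they are random, since the sketch only shows the forward pass.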

Highlights

  • Deep convolutional neural networks (CNNs) have been a powerful model for numerous tasks related to system identification in recent years [1]

  • By using both a clearly defined biophysical model and real retinal data, we show that CNNs are interpretable when single retinal ganglion cells (RGCs) are modeled, with the benefit of clarifying what has been learned in the network structure components of CNNs

  • A variation of non-negative matrix factorization was used to analyze RGC responses to white noise images and identify a number of subunits corresponding to the bipolar cells of one RGC [30]. With this picture in mind, here we address the question of what types of network structure components can be revealed by a CNN when it is used to model the single RGC response
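The subunit-identification idea in the last highlight can be sketched with a basic non-negative matrix factorization: a matrix of spike-triggered stimuli is factored into nonnegative subunit filters and weights. This is a generic Lee-Seung multiplicative-update NMF on toy data, assuming nonnegative inputs; it is not the specific variant used in [30], and the data here are synthetic.

```python
import numpy as np

def nmf(V, k, iters=200, seed=0):
    """Basic NMF via Lee-Seung multiplicative updates: V ~ W @ H."""
    rng = np.random.default_rng(seed)
    n, m = V.shape
    W = rng.random((n, k)) + 1e-3
    H = rng.random((k, m)) + 1e-3
    for _ in range(iters):
        # multiplicative updates preserve nonnegativity of W and H
        H *= (W.T @ V) / (W.T @ W @ H + 1e-9)
        W *= (V @ H.T) / (W @ H @ H.T + 1e-9)
    return W, H

# Toy "spike-triggered stimulus" matrix: each row is a stimulus
# snippet that drove a spike, built from two hidden subunit filters.
rng = np.random.default_rng(1)
subunits = np.abs(rng.normal(size=(2, 16)))   # hypothetical true filters
weights = np.abs(rng.normal(size=(500, 2)))
V = weights @ subunits                        # nonnegative data matrix
W, H = nmf(V, k=2)                            # rows of H recover the subunits
err = np.linalg.norm(V - W @ H) / np.linalg.norm(V)
```

With real recordings, the number of subunits `k` is not known in advance and is typically chosen by cross-validation.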

Introduction

Qi Yan and Yajing Zheng contributed to this paper. Corresponding authors: Zhaofei Yu and Jian K. Liu.

Deep convolutional neural networks (CNNs) have been a powerful model for numerous tasks related to system identification in recent years [1]. By training a CNN with a large set of target images, it can achieve human-level performance for visual object recognition. However, it remains a challenge to understand the relationship between this computation and the underlying network structure components learned within CNNs [2], [3]. Visualizing, interpreting, and understanding CNNs are not trivial [4]
