Abstract
Deep learning neural networks produce excellent results in various pattern recognition tasks. It is of great practical importance to answer open questions regarding model design and parameterization, and to understand how input data are converted into meaningful knowledge at the output. The layer-by-layer evolution of the abstraction level has been proposed previously as a quantitative measure to describe the emergence of knowledge in the network. In this work we systematically evaluate the abstraction level for a variety of image datasets. We observe a general tendency of increasing abstraction from input to output, with the exception of a drop in abstraction at some ReLU and pooling layers. The abstraction level is relatively low and does not change significantly in the first few layers after the input, while it fluctuates around a high saturation value in the layers preceding the output. Finally, the layer-by-layer change in abstraction is not normally distributed; rather, it approximates an exponential distribution. These results point to salient local features of deep layers impacting overall (global) classification performance. We compare the results extracted from deep learning neural networks performing image processing tasks with results obtained by analyzing brain imaging data. Our conclusions may be helpful in future designs of more efficient, compact deep learning neural networks.
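The abstract does not define the abstraction measure itself, so the following is only a minimal sketch of the kind of layer-by-layer analysis described: per-layer activations are collected with forward hooks and a hypothetical proxy score (here, a silhouette score of class clusters in each layer's activation space, chosen purely for illustration) is computed per layer, along with the layer-by-layer changes whose distribution could then be examined. The model, proxy metric, and dataset stand-ins are all assumptions, not the authors' method.

```python
# Sketch of a layer-by-layer analysis of per-layer "abstraction".
# ASSUMPTION: the paper's abstraction measure is not given in the abstract;
# the silhouette-score proxy below is a placeholder for illustration only.
import torch
import torch.nn as nn
import torchvision.models as models
from sklearn.metrics import silhouette_score

model = models.resnet18(weights=None)  # any image classifier would do
model.eval()

# Collect activations of top-level layers (Conv, ReLU, Pool, residual stages).
activations = {}
def make_hook(name):
    def hook(module, inp, out):
        activations[name] = out.detach().flatten(start_dim=1)
    return hook

for name, module in model.named_children():
    if isinstance(module, (nn.Conv2d, nn.ReLU, nn.MaxPool2d,
                           nn.Sequential, nn.AdaptiveAvgPool2d)):
        module.register_forward_hook(make_hook(name))

# Dummy batch standing in for an image dataset with class labels.
images = torch.randn(64, 3, 64, 64)
labels = torch.randint(0, 10, (64,))
with torch.no_grad():
    model(images)

# Hypothetical per-layer abstraction proxy and its layer-by-layer change.
scores = {name: silhouette_score(act.numpy(), labels.numpy())
          for name, act in activations.items()}
names = list(scores)
deltas = [scores[b] - scores[a] for a, b in zip(names, names[1:])]
print(scores)   # per-layer proxy values (input -> output order)
print(deltas)   # layer-by-layer changes whose distribution is analyzed
```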