Abstract
Layer-by-layer training is an alternative to end-to-end backpropagation for training deep convolutional neural networks. On specific architectures, layer-by-layer training can yield very competitive results: on the ImageNet database (www.imagenet.org), layer-by-layer trained networks can perform comparably to many modern end-to-end trained networks. This article compares the performance gap between the two training procedures across a wide range of network architectures and analyzes the potential limitations of layer-by-layer training. The results show that layer-by-layer learning quickly saturates beyond a certain critical depth because the early layers of the network overfit. Several approaches that have been used to address this problem are discussed, along with a methodology for improving layer-by-layer learning across a variety of network architectures. Fundamentally, this research highlights the need to open up the black box that modern deep neural networks represent and to explore the interactions between intermediate hidden layers within deep networks through the lens of layer-by-layer learning.
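To make the training procedure under discussion concrete, the following is a minimal sketch of greedy layer-by-layer training in PyTorch, assuming a toy stack of convolutional stages, each trained against its own auxiliary classifier and then frozen. The architecture, optimizer settings, and random stand-in data are illustrative assumptions, not the paper's exact setup.

# Minimal sketch of greedy layer-by-layer training (illustrative only).
import torch
import torch.nn as nn

def make_block(in_ch, out_ch):
    # One trainable stage: conv -> batchnorm -> relu -> downsample.
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1),
        nn.BatchNorm2d(out_ch),
        nn.ReLU(inplace=True),
        nn.MaxPool2d(2),
    )

def make_aux_head(ch, num_classes):
    # Auxiliary classifier used only while training the current stage.
    return nn.Sequential(
        nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(ch, num_classes)
    )

def train_layerwise(blocks, num_classes, loader, epochs_per_stage=1):
    criterion = nn.CrossEntropyLoss()
    frozen = []  # stages already trained; kept fixed afterwards
    for block in blocks:
        ch = [m for m in block if isinstance(m, nn.Conv2d)][-1].out_channels
        head = make_aux_head(ch, num_classes)
        opt = torch.optim.SGD(
            list(block.parameters()) + list(head.parameters()), lr=0.1
        )
        for _ in range(epochs_per_stage):
            for x, y in loader:
                with torch.no_grad():  # earlier stages are frozen
                    for f in frozen:
                        x = f(x)
                loss = criterion(head(block(x)), y)
                opt.zero_grad()
                loss.backward()  # gradients stop at the current stage
                opt.step()
        block.eval()
        for p in block.parameters():
            p.requires_grad_(False)
        frozen.append(block)
    return nn.Sequential(*frozen)

# Toy usage with random tensors standing in for a real dataset.
if __name__ == "__main__":
    data = [(torch.randn(8, 3, 32, 32), torch.randint(0, 10, (8,)))
            for _ in range(4)]
    stages = [make_block(3, 16), make_block(16, 32), make_block(32, 64)]
    trunk = train_layerwise(stages, num_classes=10, loader=data)

Because each stage is optimized only against its local auxiliary loss and then frozen, early stages can overfit to features that suit their own classifier rather than later stages, which is the saturation effect the abstract describes.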