Abstract
Over the past years, convolutional neural networks (CNNs) have achieved remarkable success in deep learning. The performance of CNN-based models has driven major advances in a wide range of tasks, from computer vision to natural language processing. However, the theoretical calculations behind the convolution operation are rarely presented in detail. This study aims to provide a thorough understanding of the convolution operation by examining how the backpropagation algorithm works for CNNs. To explain the training of CNNs clearly, the convolution operation on images is described in detail and backpropagation in CNNs is highlighted. In addition, the Labeled Faces in the Wild (LFW) dataset, which is frequently used in face recognition applications, is used to visualize what CNNs learn. The intermediate activations of a CNN trained on the LFW dataset are visualized to gain insight into how CNNs perceive the world. Thus, the feature maps are interpreted visually alongside the training process.
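As a minimal sketch of the kind of feature-map visualization described above, the snippet below builds a small CNN and captures the intermediate activations of its convolutional layers with forward hooks. The framework (PyTorch), the toy architecture, and the random input standing in for preprocessed LFW face images are all assumptions for illustration, not the paper's actual model or training setup.

```python
import torch
import torch.nn as nn

# A toy CNN standing in for the (unspecified) model trained on LFW.
model = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
    nn.MaxPool2d(2),
    nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
    nn.MaxPool2d(2),
)

activations = {}

def save_activation(name):
    # Hook that stores the layer's output feature maps under the given name.
    def hook(module, inputs, output):
        activations[name] = output.detach()
    return hook

# Attach a hook to every convolutional layer.
for idx, layer in enumerate(model):
    if isinstance(layer, nn.Conv2d):
        layer.register_forward_hook(save_activation(f"conv_{idx}"))

# A random batch stands in for preprocessed LFW face images (3x64x64 here).
x = torch.randn(1, 3, 64, 64)
model(x)

for name, fmap in activations.items():
    # fmap has shape (batch, channels, height, width); each channel is one
    # feature map that can be plotted as a grayscale image.
    print(name, tuple(fmap.shape))
```

Each stored tensor can then be rendered channel by channel (e.g., with matplotlib) to inspect what individual filters respond to at different depths of the network.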