Abstract
After evaluating the difficulty CNNs face in extracting convolutional features, this paper proposes an improved convolutional neural network (CNN) method (ICNN-BNDOA) based on Batch Normalization (BN), Dropout (DO), and the Adaptive Moment Estimation (Adam) optimizer. To circumvent the gradient problem and accelerate convergence, ICNN-BNDOA uses a sequential CNN structure with the Leaky Rectified Linear Unit (LeakyReLU) as the activation function (AF). To address overfitting, BN and DO layers are added to the fully connected CNN layers and the output layers, respectively, to reduce the cross-entropy loss, and the network is trained with the Adam optimizer. Through its small regularization effect, BN also substantially speeds up training and improves model performance. The proposed system was compared against a conventional CNN (CCNN) on the CIFAR-10 benchmark dataset, and the results showed that adding the BN and DO layers yields high recognition performance. Statistically, ICNN-BNDOA outperformed the CCNN with training and testing accuracies of 0.6904 and 0.6861, respectively, and training and testing losses of 0.8910 and 0.9136, respectively.
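The three components the abstract names (LeakyReLU activation, batch normalization, and dropout) can be illustrated with a minimal NumPy sketch of their forward passes. This is not the authors' implementation; the hyperparameter values (negative slope `alpha`, dropout `rate`, epsilon) are illustrative assumptions, not values from the paper.

```python
import numpy as np

def leaky_relu(x, alpha=0.01):
    # LeakyReLU passes positive values unchanged and scales
    # negatives by a small slope, so gradients never vanish
    # entirely for negative inputs (alpha is an assumed default).
    return np.where(x > 0, x, alpha * x)

def batch_norm(x, gamma=1.0, beta=0.0, eps=1e-5):
    # Normalize each feature over the batch dimension (axis 0),
    # then apply the learnable scale (gamma) and shift (beta).
    # The per-batch statistics give the mild regularization
    # effect mentioned in the abstract.
    mean = x.mean(axis=0)
    var = x.var(axis=0)
    return gamma * (x - mean) / np.sqrt(var + eps) + beta

def dropout(x, rate=0.5, training=True, rng=None):
    # Inverted dropout: during training, zero each activation
    # with probability `rate` and rescale survivors by
    # 1 / (1 - rate); at inference, pass inputs through unchanged.
    if not training:
        return x
    rng = rng or np.random.default_rng(0)
    mask = rng.random(x.shape) >= rate
    return x * mask / (1.0 - rate)
```

In a framework such as Keras, these would correspond to `LeakyReLU`, `BatchNormalization`, and `Dropout` layers inserted into a `Sequential` model and trained with the `Adam` optimizer on a cross-entropy loss.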