Abstract
Convolutional Neural Networks (CNNs) have become essential for solving image classification tasks. One of the most widely used CNN architectures for image classification is the Visual Geometry Group (VGG) network. The VGG architecture consists of multiple blocks of convolution and pooling operations followed by fully connected layers. Among the various VGG models, the VGG16 architecture has attracted great attention due to its remarkable performance and simplicity. However, VGG16 contains a large number of parameters, which adds to its complexity and leads to long execution times. This complexity also makes VGG16 more prone to overfitting, which can degrade classification accuracy. This study proposes an enhancement of the VGG16 architecture to overcome these drawbacks. The enhancement involves reducing the number of convolution blocks, implementing batch normalization (BN) layers, and integrating a global average pooling (GAP) layer together with additional dense and dropout layers. The experiment was carried out on six benchmark datasets for image classification tasks. The results show that the proposed network has 79% fewer parameters than the standard VGG16, while also yielding better classification accuracy and shorter execution time. Reducing the parameters in the proposed improved VGG architecture allows for more efficient computation and memory usage. Overall, the proposed improved VGG architecture offers a promising solution to the challenges of long execution times and excessive memory usage in the VGG16 architecture.
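One intuition behind the parameter reduction described above can be made concrete with a small arithmetic sketch: replacing VGG16's flatten-plus-fully-connected classifier head with a GAP layer eliminates most of the network's parameters, since in standard VGG16 the fully connected layers hold the bulk of the weights. The standard-head sizes below (7×7×512 flattened features, two 4096-unit dense layers, 1000 output classes) are the published VGG16 values; the 10-class GAP head is a hypothetical illustration, not the exact configuration of the proposed model.

```python
# Sketch: parameter count of VGG16's standard classifier head
# versus a global-average-pooling (GAP) head.

def dense_params(n_in, n_out):
    """Weights plus biases of a fully connected layer."""
    return n_in * n_out + n_out

# Standard VGG16 head: flatten 7x7x512 -> 4096 -> 4096 -> 1000 classes
flat_features = 7 * 7 * 512  # 25,088 flattened features
vgg16_head = (dense_params(flat_features, 4096)
              + dense_params(4096, 4096)
              + dense_params(4096, 1000))

# GAP head: each of the 512 feature maps is averaged to a single
# value, followed by one dense layer to (say) 10 classes.
gap_head = dense_params(512, 10)

print(f"VGG16 head parameters: {vgg16_head:,}")  # 123,642,856
print(f"GAP head parameters:   {gap_head:,}")    # 5,130
```

The classifier head alone accounts for roughly 124 million of VGG16's ~138 million parameters, which is why combining GAP with fewer convolution blocks can cut the total so sharply.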
Published in: JOIV : International Journal on Informatics Visualization