Abstract
In recent years, the rapid evolution of neural networks, exemplified by models such as VGG and GoogLeNet, has reshaped the landscape of artificial intelligence. This paper presents a comparative analysis of these two prominent models in the context of image classification. VGG, known for its deep, uniform architecture, and GoogLeNet, built around Inception modules, are evaluated through experiments on flower image datasets. The study applies transfer learning and examines the integration of attention mechanisms, specifically CBAM, into both models. Results indicate that GoogLeNet outperforms VGG in parameter efficiency, convergence speed, and overall accuracy, and that adding the attention mechanism improves classification accuracy for both models. Beyond revealing these performance distinctions, the comparison offers practical guidance for refining the models in applied settings. The paper concludes by identifying directions for further research, emphasizing the optimal placement of attention mechanisms and a comparison with traditional methods in future studies.
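To make the described setup concrete, the following is a minimal PyTorch sketch of transfer learning with a CBAM block attached to a pretrained backbone. It assumes torchvision's ImageNet weights and a 5-class flower dataset, and uses VGG16 as the backbone; the CBAM implementation and its placement after the convolutional features are illustrative assumptions, not the paper's exact configuration.

```python
import torch
import torch.nn as nn
import torchvision.models as models

class ChannelAttention(nn.Module):
    """CBAM channel attention: shared MLP over global avg- and max-pooled features."""
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
        )

    def forward(self, x):
        b, c, _, _ = x.shape
        avg = self.mlp(x.mean(dim=(2, 3)))              # (B, C) from average pooling
        mx = self.mlp(x.amax(dim=(2, 3)))               # (B, C) from max pooling
        scale = torch.sigmoid(avg + mx).view(b, c, 1, 1)
        return x * scale

class SpatialAttention(nn.Module):
    """CBAM spatial attention: 7x7 conv over stacked channel-wise avg/max maps."""
    def __init__(self, kernel_size=7):
        super().__init__()
        self.conv = nn.Conv2d(2, 1, kernel_size, padding=kernel_size // 2)

    def forward(self, x):
        avg = x.mean(dim=1, keepdim=True)               # (B, 1, H, W)
        mx = x.amax(dim=1, keepdim=True)                # (B, 1, H, W)
        scale = torch.sigmoid(self.conv(torch.cat([avg, mx], dim=1)))
        return x * scale

class CBAM(nn.Module):
    """Channel attention followed by spatial attention."""
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.channel = ChannelAttention(channels, reduction)
        self.spatial = SpatialAttention()

    def forward(self, x):
        return self.spatial(self.channel(x))

# Transfer learning: start from ImageNet weights, freeze the convolutional
# backbone, and replace the classifier head for a 5-class flower dataset
# (dataset size is an assumption for illustration).
num_classes = 5
vgg = models.vgg16(weights=models.VGG16_Weights.DEFAULT)
for p in vgg.features.parameters():
    p.requires_grad = False
# Insert CBAM after the convolutional backbone, before the classifier.
vgg.features = nn.Sequential(vgg.features, CBAM(channels=512))
vgg.classifier[6] = nn.Linear(4096, num_classes)
```

Only the CBAM block and the new classification head remain trainable here, so fine-tuning on the flower dataset updates a small fraction of the network's parameters; the same pattern applies to GoogLeNet by replacing its `fc` layer.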