Abstract

In this study, we explore the effectiveness of transfer learning for training convolutional neural networks (CNNs) on image classification tasks with limited data. Using the pretrained VGG16 model, we investigate how varying the number of trainable layers affects classification accuracy. Our specific task is distinguishing images of cars with open hoods from those with closed hoods. We demonstrate that transfer learning enables the pretrained VGG16 model to exceed 97% accuracy on this binary classification task, even with a training set of only 1000 image samples. This research was conducted solely by the author of this paper.
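The approach described above can be sketched in Keras: load VGG16 with ImageNet weights, freeze all but the last few convolutional layers, and attach a small binary-classification head. The split point, head architecture, learning rate, and input size below are illustrative assumptions, not the paper's exact configuration.

```python
import tensorflow as tf
from tensorflow.keras import layers, models
from tensorflow.keras.applications import VGG16

def build_model(num_trainable_layers=4, weights="imagenet"):
    # VGG16 convolutional base without the original 1000-class head.
    # weights="imagenet" loads pretrained filters (the transfer-learning step).
    base = VGG16(weights=weights, include_top=False,
                 input_shape=(224, 224, 3))
    # Freeze every layer except the last `num_trainable_layers`,
    # so only the deepest filters adapt to the new task.
    base.trainable = True
    for layer in base.layers[:-num_trainable_layers]:
        layer.trainable = False
    # Small classification head: one sigmoid unit for the binary
    # open-hood vs. closed-hood decision.
    model = models.Sequential([
        base,
        layers.GlobalAveragePooling2D(),
        layers.Dense(256, activation="relu"),
        layers.Dense(1, activation="sigmoid"),
    ])
    model.compile(optimizer=tf.keras.optimizers.Adam(1e-4),
                  loss="binary_crossentropy",
                  metrics=["accuracy"])
    return model
```

Varying `num_trainable_layers` reproduces the experiment the abstract describes: each setting changes how much of the pretrained feature hierarchy is fine-tuned on the small dataset.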
