Abstract

Deep learning is the key technology behind a wide variety of applications such as self-driving cars, image recognition, automatic machine translation, and automatic handwriting generation. This success has been fueled by the availability of huge datasets, GPU computing, and architectural advances such as max pooling. Earlier machine learning techniques employed two phases: feature extraction and classification. The performance of such algorithms depended heavily on how well the features were extracted, and that was their major bottleneck. Deep learning techniques instead employ Convolutional Neural Networks (CNNs) with numerous layers of non-linear processing that extract features automatically and perform classification, removing this bottleneck. In real-world applications, however, a suitable dataset is often unavailable or too small to classify images accurately, and CNNs are hard to train on small datasets. Transfer learning has emerged as a very powerful technique in which the knowledge gained from a larger dataset is transferred to a new, smaller one. Data augmentation and dropout are also effective techniques for dealing with small datasets. In this paper, different techniques using the VGG16 pretrained model are compared on a small dataset.
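To make the combination of techniques concrete, the following is a minimal sketch of transfer learning with a pretrained VGG16 base, together with data augmentation and dropout, assuming a TensorFlow/Keras setup. The framework choice, number of classes, layer sizes, and augmentation parameters are illustrative assumptions, not the paper's exact configuration.

```python
# Minimal sketch: transfer learning with a frozen VGG16 base on a small image
# dataset, combined with data augmentation and dropout. All hyperparameters
# below (NUM_CLASSES, dense width, dropout rate, augmentation settings) are
# hypothetical, chosen only for illustration.
import tensorflow as tf
from tensorflow.keras import layers, models
from tensorflow.keras.applications import VGG16
from tensorflow.keras.applications.vgg16 import preprocess_input

NUM_CLASSES = 10  # hypothetical number of classes in the small dataset

# Load the VGG16 convolutional base pretrained on ImageNet, without its
# original classifier head.
base = VGG16(weights="imagenet", include_top=False,
             input_shape=(224, 224, 3))
base.trainable = False  # freeze: reuse features learned on the large dataset

model = models.Sequential([
    # Data augmentation layers synthetically enlarge the small dataset;
    # they are active only during training.
    layers.RandomFlip("horizontal"),
    layers.RandomRotation(0.1),
    layers.RandomZoom(0.1),
    # VGG16 expects its specific ImageNet preprocessing.
    layers.Lambda(preprocess_input),
    base,
    layers.Flatten(),
    layers.Dense(256, activation="relu"),
    layers.Dropout(0.5),  # dropout regularizes the new classifier head
    layers.Dense(NUM_CLASSES, activation="softmax"),
])

model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(train_ds, validation_data=val_ds, epochs=10)
```

Freezing the convolutional base transfers the features learned on ImageNet, so only the small classifier head is trained on the limited data; the base can later be partially unfrozen for fine-tuning once the head has converged.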
