Abstract

Deep convolutional neural networks (DCNs) are increasingly used, with considerable success, in image classification tasks trained over large datasets. However, such large datasets are not always available or affordable in many application areas where we would like to apply DCNs, and where only datasets on the order of a few thousand labelled images exist, acquired and annotated through lengthy and costly processes (such as in plant recognition, medical imaging, etc.). In such cases DCNs do not generally show competitive performance, and one must resort to fine-tuning networks that were pretrained at great cost on large generic datasets, with no a priori guarantee that they will work well in specialized domains. In this work we propose to train DCNs with a greedy layer-wise method, analogous to that used in unsupervised deep networks. We show that, for small datasets, this method outperforms both DCNs trained without pretrained models and results reported in the literature with other methods. Additionally, our method learns cleaner and more interpretable visual features. Our results are also competitive with convolutional methods based on pretrained models when applied to general-purpose datasets, and we obtain them with much smaller datasets (10K vs. 1.2 million images) at a fraction of the computational cost. We therefore consider this work a first milestone in our quest to successfully use DCNs for small specialized datasets.
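The greedy layer-wise scheme mentioned above can be sketched as follows. This is a minimal illustration under stated assumptions, not the authors' implementation: dense layers on a synthetic toy dataset stand in for convolutional blocks and real images, and each layer is trained against the labels through a temporary softmax head, which is then discarded before the layer is frozen and the next one is stacked on top.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a small labelled dataset: 200 samples, 8 features, 2 classes.
X = rng.normal(size=(200, 8))
y = (X[:, :4].sum(axis=1) > 0).astype(int)

def one_hot(labels, k=2):
    out = np.zeros((labels.size, k))
    out[np.arange(labels.size), labels] = 1.0
    return out

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def train_layer(H, labels, width, steps=300, lr=0.5):
    """Train one hidden layer plus a temporary softmax head on inputs H.

    Returns only the hidden-layer weights; the head is discarded,
    mirroring the greedy layer-wise idea (layer kept, head thrown away).
    """
    n, d = H.shape
    W = rng.normal(scale=0.1, size=(d, width))   # hidden layer (conv-block stand-in)
    V = rng.normal(scale=0.1, size=(width, 2))   # temporary classifier head
    Y = one_hot(labels)
    for _ in range(steps):
        A = np.tanh(H @ W)                 # hidden activations
        G = (softmax(A @ V) - Y) / n       # softmax cross-entropy gradient
        dA = (G @ V.T) * (1.0 - A**2)      # backprop through tanh
        V -= lr * A.T @ G
        W -= lr * H.T @ dA
    return W

def train_head(H, labels, steps=300, lr=0.5):
    """Train the final softmax classifier on top of the frozen stack."""
    n, d = H.shape
    V = rng.normal(scale=0.1, size=(d, 2))
    Y = one_hot(labels)
    for _ in range(steps):
        V -= lr * H.T @ (softmax(H @ V) - Y) / n
    return V

# Greedy layer-wise training: each layer is fit supervised, then frozen.
W1 = train_layer(X, y, width=16)
H1 = np.tanh(X @ W1)                       # layer 1 frozen; its outputs feed layer 2
W2 = train_layer(H1, y, width=16)
H2 = np.tanh(H1 @ W2)

V = train_head(H2, y)
acc = float((softmax(H2 @ V).argmax(axis=1) == y).mean())
print(f"train accuracy: {acc:.2f}")
```

Note that, unlike end-to-end backpropagation, no gradient ever flows back into an already-trained layer, which is what keeps each training stage small and cheap.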
