Abstract

When training data is inadequate, it is difficult to train a deep Convolutional Neural Network (CNN) from scratch with randomly initialized weights. Instead, it is common practice to first train a source CNN model on a very large data set and then use the learned source model as an initialization for training a target CNN model; in the deep learning realm, this procedure is called fine-tuning a CNN. This paper presents an experimental study on how to combine a collection of incrementally fine-tuned CNN models for cross-domain, multi-class object category recognition tasks. A group of fine-tuned CNN models is trained on the target data set by incrementally transferring parameters from a source CNN model that was initially trained on a large data set. The last two fully-connected (FC) layers of the source CNN model are removed, and two new FC layers are added so that the resulting model fits the target task. Experimental results on the Caltech-101 and Office data sets demonstrate the effectiveness and strong performance of the proposed method, which is particularly well suited to object recognition when target training data is inadequate.
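To make the layer-replacement step concrete, the sketch below shows one way it could be implemented. This is a minimal illustration only: the abstract does not name the framework or backbone, so PyTorch/torchvision, an AlexNet-style source model pre-trained on ImageNet, and the constant NUM_TARGET_CLASSES are all assumptions, not details taken from the paper.

```python
import torch
import torch.nn as nn
from torchvision import models

# Assumption: the target task is Caltech-101-like; adjust as needed.
NUM_TARGET_CLASSES = 101

# 1. Load a source CNN pre-trained on a very large data set (ImageNet).
#    AlexNet is an assumed stand-in for the paper's source model.
model = models.alexnet(weights=models.AlexNet_Weights.IMAGENET1K_V1)

# AlexNet's classifier is: Dropout, Linear(9216->4096), ReLU,
# Dropout, Linear(4096->4096), ReLU, Linear(4096->1000).
# 2. Eliminate the last two FC layers of the source model and add
#    two new FC layers sized for the target task.
model.classifier = nn.Sequential(
    *list(model.classifier.children())[:-3],   # keep fc6 block only
    nn.Linear(4096, 4096),                     # new FC layer
    nn.ReLU(inplace=True),
    nn.Linear(4096, NUM_TARGET_CLASSES),       # new FC output layer
)

# 3. One member of an incrementally fine-tuned collection: transfer the
#    convolutional parameters unchanged (frozen) and train only the rest.
#    Varying how many layers are frozen yields the other members.
for param in model.features.parameters():
    param.requires_grad = False

optimizer = torch.optim.SGD(
    (p for p in model.parameters() if p.requires_grad),
    lr=1e-3,
    momentum=0.9,
)
```

Under these assumptions, training this model on the target data set with a standard cross-entropy loss would correspond to one fine-tuned model in the collection; repeating the procedure while transferring different numbers of source layers would produce the group of models the abstract describes.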
