Abstract

Large, deep Convolutional Neural Network (CNN) architectures achieved impressive improvements in the ImageNet Large Scale Visual Recognition Challenge (ILSVRC) 2012 and 2013. In most real applications, however, only tens of classes need to be recognized. Once a deep CNN has been trained on the ILSVRC dataset, efficiently transferring its large, deep structure to a new dataset remains a difficult problem. In this paper, three algorithms are proposed to implement the transfer: fine-tuning of the large structure, Normalized Google Distance, and WordNet lexical semantic similarity. In experiments conducted on the Huawei accurate and fast Mobile Video Annotation Challenge (MoVAC) dataset, the fine-tuning algorithm achieves the best performance in both accuracy and training time.
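The fine-tuning approach referred to above follows the standard transfer-learning recipe: keep the convolutional layers learned on ILSVRC and retrain (or replace) the final classifier for the smaller target label set. The sketch below is only a minimal illustration of that recipe; PyTorch, ResNet-18, and the 20-class target count are assumptions for the example, not the paper's actual network, framework, or dataset.

```python
# Minimal fine-tuning sketch: adapt an ImageNet-pretrained CNN to a new
# dataset with only tens of classes. ResNet-18 and the class count below
# are illustrative assumptions, not the paper's configuration.
import torch
import torch.nn as nn
from torchvision import models

NUM_TARGET_CLASSES = 20  # hypothetical number of classes in the new dataset

# Load a network pretrained on the ILSVRC (ImageNet) dataset.
model = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)

# Replace the 1000-way ImageNet classifier with a layer sized to the new task.
model.fc = nn.Linear(model.fc.in_features, NUM_TARGET_CLASSES)

# Freeze the pretrained convolutional layers so only the new classifier
# is updated, which keeps training on the small dataset fast.
for name, param in model.named_parameters():
    if not name.startswith("fc"):
        param.requires_grad = False

optimizer = torch.optim.SGD(
    (p for p in model.parameters() if p.requires_grad), lr=1e-3, momentum=0.9
)
criterion = nn.CrossEntropyLoss()
# A training loop over the target dataset would then update model.fc only.
```

Whether to freeze all convolutional layers or also fine-tune the last few blocks is a design choice; freezing more layers trades some accuracy for shorter training time.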
