Abstract

Image-based object recognition is a well-studied topic in computer vision, and feature extraction for hand-drawn sketch recognition and retrieval has become increasingly popular among computer vision researchers. The growing use of touchscreens and portable devices has challenged the computer vision community to access sketches more efficiently and effectively. In this article, a novel deep convolutional neural network (DCNN) framework for hand-drawn sketch recognition is proposed, composed of three well-known pre-trained DCNN architectures used for transfer learning with a global average pooling (GAP) strategy. First, augmented variants of natural images were generated and combined with the TU-Berlin sketch images across all 250 sketch object categories. Second, feature maps were extracted from the input images by three asymmetric DCNN architectures, namely the Visual Geometry Group network (VGGNet), Residual Network (ResNet), and Inception-v3. Finally, the distinct feature maps were concatenated, features were reduced by a GAP layer, and the resulting feature vector was fed into a softmax classifier for sketch classification. The performance of the proposed framework is comprehensively evaluated on the augmented-variant TU-Berlin sketch dataset for sketch classification and retrieval. Experimental results show that the proposed framework brings substantial improvements over state-of-the-art methods for both tasks.
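The fusion pipeline the abstract describes (per-backbone feature maps, global average pooling, concatenation, softmax classification) can be sketched in plain NumPy. The feature-map shapes and random classifier weights below are illustrative placeholders standing in for the actual VGGNet/ResNet/Inception-v3 backbones, not the paper's implementation:

```python
import numpy as np

def global_average_pool(fmaps):
    # GAP collapses each (C, H, W) feature map stack to a C-dim vector
    # by averaging over the spatial dimensions.
    return fmaps.mean(axis=(1, 2))

def softmax(z):
    # Numerically stable softmax over class logits.
    e = np.exp(z - z.max())
    return e / e.sum()

rng = np.random.default_rng(0)

# Stand-ins for the last convolutional outputs of three backbones
# (channel counts chosen to mimic typical VGG/ResNet/Inception heads).
f_vgg = rng.standard_normal((512, 7, 7))
f_res = rng.standard_normal((2048, 7, 7))
f_inc = rng.standard_normal((2048, 8, 8))

# Pool each backbone's maps, then concatenate into one fused descriptor.
v = np.concatenate([global_average_pool(f) for f in (f_vgg, f_res, f_inc)])

# A hypothetical linear softmax classifier over the 250 TU-Berlin categories.
W = rng.standard_normal((250, v.size)) * 0.01
p = softmax(W @ v)
```

Note that GAP is applied per backbone before concatenation here, so feature maps with different spatial sizes (7x7 vs. 8x8) fuse cleanly into a single fixed-length vector.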

Highlights

  • From a human point of view, sketch analysis may seem a fundamental problem, yet it plays a prominent role in the field of human-computer interaction (HCI)

  • A novel and efficient CNN-based framework for hand-drawn sketch recognition is proposed that exploits the strength of features extracted from various pre-trained deep convolutional neural networks (DCNNs) via transfer learning, using the global average pooling (GAP) concept

  • A deep CNN-based framework for sketch recognition via transfer learning with a global average pooling strategy is proposed


Summary

INTRODUCTION

From a human point of view, sketch analysis may seem a fundamental problem, yet it plays a prominent role in the field of human-computer interaction (HCI). To overcome the deficiencies of existing sketch recognition systems, and following the emerging trend of using deep learning for feature extraction via transfer learning, we apply three different well-known DCNN architectures from state-of-the-art visual recognition to the task of sketch recognition. A novel and efficient CNN-based framework for hand-drawn sketch recognition is proposed that exploits the strength of features extracted from various pre-trained DCNNs via transfer learning, using the global average pooling (GAP) concept.
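The augmentation step the abstract mentions (generating "augmented variants" to combine with the TU-Berlin sketches) is not specified in detail here. A minimal illustrative sketch, assuming simple flip and 90-degree-rotation variants of a grayscale image array; the paper's actual augmentation recipe may differ:

```python
import numpy as np

def augment_variants(img):
    """Generate simple augmented variants of a 2-D grayscale image.

    Flips and 90-degree rotations are illustrative placeholders for
    whatever augmentation the framework actually applies.
    """
    return [
        img,                 # original
        np.fliplr(img),      # horizontal flip
        np.flipud(img),      # vertical flip
        np.rot90(img, k=1),  # rotate 90 degrees
        np.rot90(img, k=3),  # rotate 270 degrees
    ]

# A crude 8x8 "sketch": a single horizontal stroke.
sketch = np.zeros((8, 8))
sketch[2, 1:7] = 1.0
augmented = augment_variants(sketch)
```

Each variant keeps the original spatial size, so the augmented set can be fed to the same backbones as the unmodified sketches.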

RELATED WORK
Handcrafted Features
Deep Features
PROPOSED METHOD
Data Preparation and Augmentation-Variants
Pre-Trained CNNs Architectures
Transfer Learning
Global Average Pooling
Dataset
Natural Images for Augmented-Variants
Environment Setting
Results and Evaluation
METHODS
Further Evaluation for Retrieval Task
Experimental Analysis
Findings
CONCLUSIONS