Abstract

Categorizing free-hand human sketches has profound implications for applications such as human-computer interaction and image retrieval. The task is non-trivial due to the iconic nature of sketches, which exhibit large variances in both appearance and structure compared with photographs. Despite recent advances made by deep learning methods, they commonly require a large training set, making them impractical for real-world applications where training sketches are cumbersome to obtain: sketches must be hand-drawn one by one rather than crawled freely from the Internet. In this work, we delve further into the data-scarcity problem of sketch-related research by proposing a few-shot sketch classification framework. The model is based on a co-regularized embedding algorithm that exploits common, shareable parts of learned human sketches, and can thereby embed a query sketch into a co-regularized sparse representation space for few-shot classification. A new dataset of 8,000 part-level annotated sketches across 100 categories is also proposed to facilitate future research. Experiments show that our approach achieves a 5-way one-shot classification accuracy of 85% and a 20-way one-shot accuracy of 51%.
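The abstract does not spell out the classification rule, but a standard way to classify a query via a sparse representation space is sparse-representation classification (SRC): code the query feature over a dictionary whose atoms come from the few labeled support sketches (or their shared parts), then assign the class whose atoms give the smallest reconstruction residual. Below is a minimal, hypothetical sketch of that idea using ISTA for the L1-regularized coding step; the dictionary layout, feature dimension, and regularization weight are illustrative assumptions, not the paper's actual algorithm.

```python
import numpy as np

def ista(D, x, lam=0.1, n_iter=200):
    """Solve min_a 0.5*||x - D@a||^2 + lam*||a||_1 by iterative
    soft-thresholding. D: (d, n) dictionary, x: (d,) query feature."""
    L = np.linalg.norm(D, 2) ** 2          # Lipschitz constant of the gradient
    a = np.zeros(D.shape[1])
    for _ in range(n_iter):
        g = a + D.T @ (x - D @ a) / L      # gradient step on the quadratic term
        a = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)  # soft threshold
    return a

def src_classify(D, labels, x, lam=0.1):
    """SRC rule: pick the class whose atoms best reconstruct x.
    labels[i] gives the class of dictionary column i."""
    a = ista(D, x, lam)
    residuals = {}
    for c in set(labels):
        mask = np.array([l == c for l in labels])
        a_c = np.where(mask, a, 0.0)       # keep only this class's coefficients
        residuals[c] = np.linalg.norm(x - D @ a_c)
    return min(residuals, key=residuals.get)
```

In a one-shot setting the dictionary would hold one (or a few part-level) feature column per class, so a 5-way task yields a small, well-conditioned coding problem.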
