Abstract

As a specific case of image recognition, zero-shot image classification is difficult to solve since its training set cannot cover all the categories of the testing set. From the viewpoint of human vision, objects can be recognized through visible and nameable descriptions of their properties. As semantic descriptions of object properties, attributes can serve as a bridge between seen and unseen categories, which makes them applicable to zero-shot image classification. Binary attributes and relative attributes are the two main kinds used for zero-shot classification, and relative attributes can capture more general semantic relationships than binary ones. However, relative attributes do not always work in zero-shot classification for categories whose relative attribute strengths are similar. To address this weakness of relative attributes in describing similar categories, we propose to construct Hybrid Relative Attributes based on Sparse Coding (SC-HRA). First, sparse coding is applied to low-level features to obtain non-semantic relative attributes, which are a necessary complement to the existing relative attributes. These are then integrated with the relative attributes to form the hybrid relative attributes (HRA). Ranking functions for the HRA are then learned by relative attribute learning. Finally, the class label is obtained according to the predicted ranking results of the HRA and the ranking relations of the HRA among the categories. To verify the effectiveness of SC-HRA, extensive experiments are conducted on face and natural scene datasets. The results show that SC-HRA achieves higher classification accuracy and AUC values.
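As a rough illustration of the pipeline described above, the sketch below encodes low-level features against a learned dictionary and concatenates the resulting sparse codes (treated as non-semantic attributes) with semantic relative-attribute scores to form the HRA. All sizes, variable names, and the use of scikit-learn's DictionaryLearning are illustrative assumptions, not the paper's exact formulation.

    # Illustrative sketch only: sparse codes over low-level features act as
    # non-semantic attributes and are concatenated with semantic relative
    # attributes to form the hybrid relative attributes (HRA).
    import numpy as np
    from sklearn.decomposition import DictionaryLearning

    rng = np.random.default_rng(0)
    X = rng.standard_normal((200, 512))        # hypothetical 512-d low-level features

    # Learn a dictionary and compute sparse codes (all sizes are assumptions).
    dico = DictionaryLearning(n_components=64, alpha=1.0, max_iter=200, random_state=0)
    nonsemantic = dico.fit_transform(X)        # 200 x 64 sparse codes

    # Hypothetical semantic relative-attribute scores, e.g. 11 attributes.
    semantic = rng.standard_normal((200, 11))

    hra = np.hstack([semantic, nonsemantic])   # hybrid relative attributes, 200 x 75
    print(hra.shape)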

Highlights

  • Image recognition has attracted the attention of many researchers and has made substantial progress in recent years

  • To address the weakness of relative attributes in describing similar categories, we propose to construct Hybrid Relative Attributes based on Sparse Coding (SC-HRA)

  • The Public Figure Face (Pub Fig) dataset consists of 772 images with 11 semantic attributes from 8 identities (Alex (A), Clive (C), Hugh (H), Jared (J), Miley (M), Scarlett (S), Viggo (V), and Zac (Z)); a 512-dimensional gist descriptor and 30-dimensional global color features are extracted from each image [32]. 560 images from the Pub Fig dataset are selected as the testing set


Summary

Introduction

Image recognition has attracted the attention of many researchers and has made substantial progress in recent years. Traditional image recognition approaches build relations between low-level features and object category labels, but this is not enough to solve zero-shot classification problems, where the unseen categories have no training samples. The range of a relative attribute is (−∞, +∞), and its absolute value carries no meaning on its own; only comparisons between different categories are meaningful. In this way, relative attributes support many visual applications, helping visual tasks understand image content as humans do, since humans prefer to understand objects in a comparative way. In relative attributes-based zero-shot classification, the relative strengths of the attributes are considered, describing the samples comparatively in terms such as “greater than”, “less than”, or “equal to”. In this way, each sample has a distinguishable location in the attribute space, and its class label can then be determined.
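To make the ranking idea concrete, here is a minimal sketch of learning a single relative-attribute ranking function via the standard RankSVM-style reduction to classification on pairwise feature differences. The variable names, pair construction, and use of LinearSVC are assumptions for illustration, not the authors' exact learning procedure.

    # Sketch: learn w so that w @ x_i > w @ x_j for ordered pairs (i stronger than j).
    # Scores w @ x lie in (-inf, +inf); only their relative order is meaningful.
    import numpy as np
    from sklearn.svm import LinearSVC

    rng = np.random.default_rng(1)
    X = rng.standard_normal((200, 512))                   # hypothetical low-level features
    strength = X[:, 0] + 0.1 * rng.standard_normal(200)   # hypothetical attribute strength

    # Build ordered pairs (i, j) with strength[i] > strength[j].
    i_idx = rng.integers(0, 200, 500)
    j_idx = rng.integers(0, 200, 500)
    keep = strength[i_idx] > strength[j_idx]
    diffs = X[i_idx[keep]] - X[j_idx[keep]]

    # RankSVM reduction: label (x_i - x_j) as +1 and (x_j - x_i) as -1.
    pairs = np.vstack([diffs, -diffs])
    labels = np.hstack([np.ones(len(diffs)), -np.ones(len(diffs))])

    ranker = LinearSVC(C=1.0, fit_intercept=False, max_iter=5000)
    ranker.fit(pairs, labels)
    w = ranker.coef_.ravel()

    scores = X @ w   # higher score = stronger predicted attribute

One such ranking function would be learned per hybrid relative attribute, and a test sample's position in the resulting ranking space, compared against the known ranking relations among categories, would determine its class label.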

Related Works
SC-HRA Model Based Zero-Shot Image Classification
Ranking Function Learning Based on Hybrid Relative Attributes
Experiments
Findings
Conclusions
