Abstract

Face retrieval systems aim to locate the indices of faces identical to a given query face. The performance of these systems relies heavily on the careful analysis of different facial attributes (gender, race, etc.), since these attributes tolerate some degree of geometric distortion, expression change, and occlusion. However, relying solely on facial attribute scores fails to provide scalability. Moreover, owing to the discriminative power of convolutional neural network (CNN) features, recent works have employed complete sets of high-dimensional transferred deep CNN features to improve accuracy; yet such systems demand substantial computational power and resources. This study exploits the distinctive capability of semantic facial attributes while refining their retrieval results with a proposed subset feature selection that reduces the dimensionality of the transferred deep descriptors. The resulting compact deeply transferred descriptors (CDTD) not only have a greatly reduced dimensionality but also greater discriminative power. Lastly, we propose ImAP (Individual Mean Average Precision), a new performance metric tailored to face retrieval, and use it to evaluate retrieval results. Multiple experiments on two scalable face datasets demonstrate that the proposed CDTD model outperforms state-of-the-art face retrieval results.
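
The abstract does not spell out the CDTD retrieval procedure or the exact definition of ImAP, so the sketch below is only an illustrative interpretation: it assumes the gallery is first pre-filtered by matching semantic attributes, candidates are then ranked by cosine similarity of the compact descriptors, and ImAP is taken as the mean over individuals of each individual's average per-query AP. All function names and these interpretations are hypothetical, not the authors' exact method.

```python
# Hypothetical sketch of attribute-filtered retrieval with compact descriptors
# and an ImAP-style evaluation; the concrete definitions are assumptions.
import numpy as np

def retrieve(query_desc, query_attrs, gallery_descs, gallery_attrs):
    """Rank gallery faces for one query.

    Assumption: candidates are first filtered by matching semantic attributes
    (e.g. gender, race), then ranked by cosine similarity of the compact
    deeply transferred descriptors (CDTD).
    """
    # Keep only gallery entries whose attribute vector matches the query's.
    mask = np.all(gallery_attrs == query_attrs, axis=1)
    candidates = np.flatnonzero(mask)

    # Cosine similarity between the query descriptor and each candidate.
    g = gallery_descs[candidates]
    sims = g @ query_desc / (
        np.linalg.norm(g, axis=1) * np.linalg.norm(query_desc) + 1e-12
    )
    return candidates[np.argsort(-sims)]  # best match first


def average_precision(ranked_ids, query_id):
    """Average precision of one ranked list for a single query identity."""
    hits, precisions = 0, []
    for rank, gid in enumerate(ranked_ids, start=1):
        if gid == query_id:
            hits += 1
            precisions.append(hits / rank)
    return float(np.mean(precisions)) if precisions else 0.0


def imap(ap_per_query, query_ids):
    """Assumed ImAP: average the per-query APs within each individual
    (identity), then take the mean over individuals."""
    per_individual = {}
    for ap, qid in zip(ap_per_query, query_ids):
        per_individual.setdefault(qid, []).append(ap)
    return float(np.mean([np.mean(v) for v in per_individual.values()]))
```

Under this reading, ImAP differs from plain mAP by weighting each individual equally regardless of how many query images that individual contributes, which is one plausible reason for an identity-centric metric in face retrieval.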
