Abstract

Social relationships link everyone in human society. Exploring social relationships in still images promotes research on the behaviors and characteristics of persons. Previous literature has shown that face and body attributes can provide effective semantic information for social relationship recognition. However, it ignores that attributes contribute very differently to recognition accuracy, and these multi-source attributes may contain redundancy and noise. This work aims to improve social relationship recognition accuracy by extracting multi-source attribute features more effectively. To this end, we propose a novel Deep Supervised Feature Selection (DSFS) framework for recognizing social relationships in photos, which fuses a deep learning algorithm with the l2,1-norm to learn a discriminative feature subset from multi-source features by leveraging face and body attributes. Experimental results on the PIPA-relation dataset demonstrate the effectiveness of the proposed DSFS framework.
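
To make the abstract's core idea concrete, the sketch below illustrates how an l2,1-norm penalty on the first layer of a classifier encourages group sparsity over the concatenated multi-source attribute features, so that entire features are kept or discarded jointly. This is a minimal, assumption-based PyTorch illustration, not the authors' implementation: the module name DSFSHead, the layer sizes, and the regularization weight lambda_l21 are hypothetical.

```python
# Minimal sketch (not the paper's code): l2,1-norm regularized feature
# selection on concatenated face/body attribute features.
import torch
import torch.nn as nn


def l21_norm(weight: torch.Tensor) -> torch.Tensor:
    """Sum of l2 norms of the columns of an nn.Linear weight (shape: out x in).

    Each column corresponds to one input feature; penalizing column norms
    drives whole features toward zero across all hidden units, which is the
    group-sparsity effect used for feature selection.
    """
    return weight.norm(p=2, dim=0).sum()


class DSFSHead(nn.Module):
    """Selection layer followed by a small relationship classifier (illustrative only)."""

    def __init__(self, in_dim: int, hidden: int, num_relations: int):
        super().__init__()
        self.select = nn.Linear(in_dim, hidden)  # columns regularized by l2,1
        self.classify = nn.Sequential(nn.ReLU(), nn.Linear(hidden, num_relations))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classify(self.select(x))


# Toy usage with arbitrary dimensions: 512-d fused attribute features,
# 16 relationship classes, and a hand-picked regularization weight.
model = DSFSHead(in_dim=512, hidden=128, num_relations=16)
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
lambda_l21 = 1e-3

features = torch.randn(32, 512)        # batch of multi-source attribute features
labels = torch.randint(0, 16, (32,))   # ground-truth relationship labels

optimizer.zero_grad()
logits = model(features)
loss = criterion(logits, labels) + lambda_l21 * l21_norm(model.select.weight)
loss.backward()
optimizer.step()
```

In this sketch, the classification loss and the l2,1 penalty are optimized jointly, so feature selection is supervised by the relationship labels rather than performed as a separate preprocessing step.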
