Abstract
We propose to use discriminative subgraphs to discover family photos among group photos in an efficient and effective way. Group photos are represented as face graphs by identifying social contexts such as age, gender, and face position. Previous work utilized bag-of-word models and considered frequent subgraphs from all group photos as features for classification. This approach, however, produces numerous subgraphs, resulting in high-dimensional features, and some of the subgraphs are not discriminative. To solve these issues, we adopt a state-of-the-art frequent subgraph mining method that removes nondiscriminative subgraphs. We also use TF-IDF normalization, which is better suited to the bag-of-word model. To validate our method, we run experiments on two datasets. Our method shows consistently better performance, achieving higher accuracy with lower feature dimensions, compared to the previous method. We also integrate our method with the recent Microsoft face recognition API and release it on a public website.
Highlights
Recent studies on image classification focus on object and scene classification
Once we identify the social context in group photos, we can use this information for various applications
To check subgraph isomorphism, we examine the depth-first search (DFS) code of a subgraph, Gs, to see whether the code is equal to or lexicographically greater than the codes generated by prior subgraphs
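The duplicate check described in the last highlight can be sketched as follows. This is a minimal illustration, not the authors' implementation: it assumes a DFS code is modeled as a list of edge tuples `(i, j, label_i, edge_label, label_j)`, so that Python's built-in lexicographic tuple comparison can stand in for the DFS-code ordering.

```python
# Hedged sketch of the DFS-code duplicate check. A candidate
# subgraph is treated as a duplicate when its DFS code is equal
# to or lexicographically greater than a code generated by an
# earlier subgraph. Edge tuples and labels below are illustrative.

def is_duplicate(dfs_code, prior_codes):
    """Return True if dfs_code is equal to or lexicographically
    greater than any previously generated DFS code."""
    return any(dfs_code >= prior for prior in prior_codes)

# Toy face-graph edges: node labels encode social context
# (hypothetical label scheme, e.g. "adult-f" = adult female).
code_small = [(0, 1, "adult-f", "left-of", "child")]
code_large = [(0, 1, "adult-m", "left-of", "child")]

is_duplicate(code_large, [code_small])  # True: not the minimum code
is_duplicate(code_small, [code_large])  # False: strictly smaller
```

Comparing tuple sequences this way mirrors how gSpan-style miners prune candidates whose DFS code is not the canonical (minimum) one.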
Summary
Recent studies on image classification focus on object and scene classification. They show remarkable performance thanks to improved image features such as those from convolutional neural networks (CNNs) [1]. Chen et al. [2] proposed a method to categorize group photos into family and non-family types. This method assumes that annotations about age, gender, and face position are well estimated beforehand using existing face detection and statistical estimation derived from the pixel context. They proposed a social-level feature named Bag-of-Face-subGraph (BoFG) to represent group photos as graphs. We integrated our method with the face recognition API1 of Microsoft Project Oxford and released it at our demo site2. In this system (Fig. 2), users can test their own group images and see how well our method performs on them.
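The TF-IDF normalization applied to the bag-of-face-subgraph counts can be sketched as below. This is an assumed, minimal formulation (plain term-frequency times log inverse document frequency, with photos playing the role of documents and mined subgraphs playing the role of words); the paper's exact weighting variant may differ.

```python
import math

def tfidf(counts_per_photo):
    """counts_per_photo: list of dicts mapping a subgraph id to its
    occurrence count in one photo (hypothetical representation).
    Returns one TF-IDF weighted feature dict per photo."""
    n = len(counts_per_photo)
    # document frequency: in how many photos each subgraph occurs
    df = {}
    for counts in counts_per_photo:
        for g in counts:
            df[g] = df.get(g, 0) + 1
    vectors = []
    for counts in counts_per_photo:
        total = sum(counts.values())
        # tf = count / total counts in the photo; idf = log(n / df)
        vec = {g: (c / total) * math.log(n / df[g])
               for g, c in counts.items()}
        vectors.append(vec)
    return vectors

# A subgraph occurring in every photo (like 'g1' here) gets zero
# weight, which is exactly why TF-IDF suppresses nondiscriminative
# but frequent subgraphs.
vecs = tfidf([{"g1": 2, "g2": 1}, {"g1": 1}])
```

Here `"g1"` and `"g2"` are placeholder subgraph identifiers; in the actual system each id would correspond to a mined discriminative face subgraph.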