Abstract

This paper describes face image retrieval and annotation based on latent semantic indexing in FIARS. To realize these mechanisms, two latent semantic spaces are constructed from visual and symbolic features, which correspond to measured lengths of a face and its parts, and to keywords, respectively. One latent semantic space is constructed from the visual features alone, and the other is constructed from both kinds of features. The former space is used to retrieve face images similar to a given face image, and the latter is used to find keywords suited to the given face image. Moreover, two types of visual features are defined: one is specified in terms of the lengths of face parts, and the other in terms of points on the outlines of a face and its parts. As an experiment, recall and precision ratios of retrieved face images are measured from the viewpoint of whether similar face images are retrieved, and the corresponding ratios of retrieved keywords are measured with both types of visual features from the viewpoint of whether the retrieved keywords are suitable for the given face image. To evaluate effectiveness, not only face images stored in the database of the system but also new face images are given as queries.
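The abstract describes the mechanism only at a high level, so the following is a minimal sketch of how latent semantic indexing of this kind is typically realized: a feature-by-image matrix is factored by truncated SVD, stored images are represented in the resulting latent space, and a new face image is folded into that space and ranked by cosine similarity. All names, parameters, and the random data below are illustrative assumptions rather than the FIARS implementation; the annotation space would be built the same way from a matrix that stacks visual features and keyword occurrences.

```python
# Minimal LSI sketch (illustrative only; not the FIARS implementation).
# Rows of X are visual features (e.g. normalized lengths of face parts),
# columns are face images stored in the database.
import numpy as np

def build_latent_space(X, k):
    """Truncated rank-k SVD of the feature-by-image matrix X (shape m x n)."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    Uk, sk, Vk = U[:, :k], s[:k], Vt[:k, :].T   # rank-k factors
    doc_vectors = Vk * sk                        # images in the k-dim latent space
    return Uk, sk, doc_vectors

def project_query(Uk, sk, q):
    """Fold a new feature vector q into the latent space: q_hat = q^T U_k S_k^{-1}."""
    return (q @ Uk) / sk

def retrieve(doc_vectors, q_hat, top=5):
    """Rank stored images by cosine similarity to the projected query."""
    sims = (doc_vectors @ q_hat) / (
        np.linalg.norm(doc_vectors, axis=1) * np.linalg.norm(q_hat) + 1e-12)
    return np.argsort(-sims)[:top]

# Example with random data standing in for real face measurements.
rng = np.random.default_rng(0)
X = rng.random((20, 50))                 # 20 visual features, 50 face images
Uk, sk, docs = build_latent_space(X, k=10)
query = rng.random(20)                   # features of a new (query) face image
print(retrieve(docs, project_query(Uk, sk, query)))
```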
