Abstract

With the development of the Internet and Web 2.0, large volumes of multimedia content have become available online. Easy access to such content, i.e., efficient and precise retrieval of images that satisfy users' needs, is highly desirable. Toward this goal, content-based image retrieval (CBIR) has been intensively studied in the research community, while text-based search is more widely adopted in industry. Both approaches have inherent disadvantages and limitations. Therefore, unlike the great success of text search, web image search engines are still immature. In this paper, we present iLike, a vertical image search engine that integrates both textual and visual features to improve retrieval performance. We bridge the semantic gap by capturing the meaning of each text term in the visual feature space, and reweight features according to their significance to the query terms. We also bridge the user intention gap, because we are able to infer the visual meanings behind the textual queries. Last but not least, we provide a thesaurus, which is generated from the statistical similarity between the feature-space representations of textual terms. Experimental results show that our approach improves both precision and recall, compared with content-based or text-based image retrieval techniques. More importantly, search results from iLike are more consistent with users' perception of the query terms.
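To make the abstract's pipeline concrete, here is a minimal sketch of the three ideas it mentions: representing a text term in the visual feature space, reweighting features by their significance to a term, and deriving a thesaurus from the similarity of term representations. The specific choices below (mean vectors for term representations, a variance-ratio weighting, cosine similarity, and the names `features` and `term_image_ids`) are illustrative assumptions, not necessarily iLike's exact formulation.

```python
import numpy as np

def term_representation(features, term_image_ids):
    """Represent a text term in the visual feature space as the mean
    feature vector of the images annotated with that term (an assumed
    formulation, not necessarily the paper's)."""
    return features[term_image_ids].mean(axis=0)

def feature_weights(features, term_image_ids, eps=1e-8):
    """Weight each visual feature by its significance to the term:
    features that vary little among the term's images, relative to the
    whole collection, are treated as more significant (assumed scheme)."""
    term_var = features[term_image_ids].var(axis=0)
    global_var = features.var(axis=0)
    return global_var / (term_var + eps)  # low term variance -> high weight

def thesaurus(term_vectors, top_k=5):
    """Build a simple thesaurus: for each term, return the terms whose
    feature-space representations are most similar (cosine similarity)."""
    names = list(term_vectors)
    mat = np.stack([term_vectors[n] for n in names])
    unit = mat / np.linalg.norm(mat, axis=1, keepdims=True)
    sims = unit @ unit.T
    return {
        names[i]: [names[j] for j in np.argsort(-sims[i]) if j != i][:top_k]
        for i in range(len(names))
    }
```

Under this reading, a query term's representation and weights together define a weighted distance in the feature space, and the thesaurus surfaces terms that "look alike" visually even when their strings differ.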
