Abstract

In the era of big data, a traditional offline approach to processing image data is simply not tenable. We do not have the computational power to process every image with every possible tag, nor the manpower to clean up the potentially noisy results. In this paper, we introduce a query-driven approach to visual tagging, focusing on the application of face tagging and clustering. We integrate active learning with query-driven probabilistic databases. Rather than asking a user to provide manual labels that minimize the uncertainty of labels (face tags) across the entire data set, we ask the user to provide labels that minimize the uncertainty of their query result (e.g., "How many times did Bob and Jim appear together?"). We use a data-driven Gaussian process model of facial appearance to write probabilistic estimates of facial identity into a probabilistic database, which can then support inference through query answering. Importantly, the database is augmented with contextual constraints (faces in the same image cannot share an identity, while faces in the same track must share one). Experiments on real-world photo collections demonstrate the effectiveness of the proposed method.
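To make the query-driven selection criterion concrete, the following is a minimal sketch of how labels could be chosen to reduce uncertainty in a query answer rather than across all faces. It assumes a toy setup: each face carries a posterior over the identities {Bob, Jim, Other}, the query counts photos containing both Bob and Jim, and the face to label is the one with the largest expected reduction in the Monte-Carlo variance of that count. The identities, probabilities, and the variance-reduction criterion are illustrative assumptions, not the paper's exact formulation (which uses a Gaussian process appearance model and a probabilistic database with contextual constraints).

```python
import numpy as np

IDENTITIES = ["Bob", "Jim", "Other"]

# Toy posterior over identities for each face, grouped by photo.
# faces[photo_index][face_index] -> probability vector over IDENTITIES.
faces = [
    [np.array([0.6, 0.3, 0.1]), np.array([0.2, 0.7, 0.1])],  # photo 0
    [np.array([0.5, 0.1, 0.4])],                              # photo 1
    [np.array([0.4, 0.4, 0.2]), np.array([0.3, 0.5, 0.2])],  # photo 2
]


def query_samples(face_probs, n_samples=5000, seed=0):
    """Monte-Carlo samples of the query answer: the number of photos
    that contain at least one Bob face and at least one Jim face."""
    rng = np.random.default_rng(seed)
    counts = np.zeros(n_samples, dtype=int)
    for photo in face_probs:
        # Sample an identity for every face in this photo.
        ids = np.stack([rng.choice(len(IDENTITIES), size=n_samples, p=p)
                        for p in photo])          # shape: (faces, samples)
        has_bob = (ids == 0).any(axis=0)
        has_jim = (ids == 1).any(axis=0)
        counts += (has_bob & has_jim)
    return counts


def expected_variance_after_label(face_probs, photo_i, face_j):
    """Expected variance of the query answer after the user labels face
    (photo_i, face_j), averaged over the label we expect them to give."""
    exp_var = 0.0
    for k, p_k in enumerate(face_probs[photo_i][face_j]):
        if p_k == 0.0:
            continue
        # Clamp the chosen face to identity k and re-estimate the query.
        clamped = [list(photo) for photo in face_probs]
        onehot = np.zeros(len(IDENTITIES))
        onehot[k] = 1.0
        clamped[photo_i][face_j] = onehot
        exp_var += p_k * query_samples(clamped).var()
    return exp_var


if __name__ == "__main__":
    base_var = query_samples(faces).var()
    best = min(
        ((i, j) for i, photo in enumerate(faces) for j in range(len(photo))),
        key=lambda ij: expected_variance_after_label(faces, *ij),
    )
    print(f"query variance before labelling: {base_var:.3f}")
    print(f"ask the user about face {best[1]} in photo {best[0]}")
```

In this sketch, the selected face is the one whose answer most tightens the distribution over the count "photos with both Bob and Jim", which may differ from the face that is individually most ambiguous.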
