Abstract

As video grows in popularity and constitutes a major part of collected data, there is an increasing need to process video selection queries --- selecting videos that contain target objects. However, a naïve scan of a video corpus without optimization would be extremely inefficient, because it applies complex detectors to irrelevant videos. This demo presents Paine, a video query system that employs a novel index mechanism to optimize video selection queries via commonsense knowledge. Paine samples video frames to build an inexpensive lossy index, then leverages probabilistic models based on existing commonsense knowledge sources to capture the semantic-level correlation among video frames, thereby allowing Paine to predict the content of unindexed videos. These models can predict which videos are likely to satisfy selection predicates, allowing Paine to avoid processing irrelevant videos. We will demonstrate a prototype of Paine that accelerates video selection queries, allowing VLDB'23 participants to run queries through the Paine interface and compare Paine against the baseline SCAN method.
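To make the idea concrete, the following is a minimal, hypothetical Python sketch of the query-skipping strategy described above: label a small sample of frames cheaply, use commonsense co-occurrence priors to estimate whether an unindexed video is likely to contain the target, and run the expensive detector only when that estimate is high. All names, data structures, and probability values here are illustrative assumptions, not Paine's actual implementation.

```python
# Illustrative sketch only -- not Paine's actual implementation.
# Idea: index a cheap sample of frames, then use commonsense priors to
# decide whether an unindexed video is worth running the full detector on.

import random

# Hypothetical commonsense prior: P(target appears | related object observed).
# In Paine, such probabilities would be derived from external commonsense
# knowledge sources; the numbers below are made up for illustration.
COOCCURRENCE_PRIOR = {
    ("dog", "leash"): 0.8,
    ("dog", "car"): 0.2,
    ("dog", "frisbee"): 0.7,
}


def cheap_frame_labels(video, sample_rate=0.05):
    """Stand-in for the inexpensive, lossy index: label a small sample of frames."""
    sampled = [f for f in video["frames"] if random.random() < sample_rate]
    return {label for frame in sampled for label in frame["labels"]}


def likelihood_of_target(indexed_labels, target):
    """Estimate P(video contains target) from sampled labels and the prior."""
    if target in indexed_labels:
        return 1.0
    probs = [COOCCURRENCE_PRIOR.get((target, label), 0.0) for label in indexed_labels]
    return max(probs, default=0.0)


def select_videos(videos, target, expensive_detector, threshold=0.5):
    """Run the expensive detector only on videos the prior deems promising."""
    selected = []
    for video in videos:
        labels = cheap_frame_labels(video)
        if likelihood_of_target(labels, target) >= threshold:
            if expensive_detector(video, target):  # full, costly check
                selected.append(video)
    return selected
```

By contrast, the SCAN baseline corresponds to calling `expensive_detector` on every video in the corpus; the sketch skips that call whenever the sampled index and the commonsense prior suggest the video is unlikely to satisfy the predicate.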
