Abstract

Despite previous successes of sliding-window object detection in images, searching for desired events in the volumetric video space remains a challenging problem, partly because pattern search in the spatio-temporal video space is far more complicated than in the spatial image space. Without knowing the location, temporal duration, and spatial scale of an event, the search space for video events is prohibitively large for exhaustive search. To reduce the search complexity, we propose a heuristic branch-and-bound solution for event detection in videos. Unlike the existing branch-and-bound method, which searches for an optimal subvolume before comparing its detection score against the threshold, we aim to directly find subvolumes whose scores exceed the threshold. In doing so, many unnecessary branches are terminated much earlier, so the search is much faster. To validate this approach, we select three human action classes from the KTH dataset for training and test on our own action dataset, which contains cluttered, moving backgrounds as well as large variations in lighting, scale, and action speed. The experimental results show that our technique dramatically reduces computational cost without significantly degrading the quality of the detection results.
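
The core idea stated above is to prune any branch whose score upper bound already falls below the detection threshold, and to accept a subvolume as soon as its score is known to reach the threshold, rather than first proving it optimal. The sketch below is a minimal illustration of that threshold-driven branch-and-bound over axis-aligned spatio-temporal subvolumes, assuming a per-voxel score map is available and using a standard ESS-style upper bound (positive scores over the largest admissible box plus negative scores over the smallest). The function name `find_subvolume_above_threshold`, the `(T, H, W)` score-map input, and the interval-splitting rule are illustrative assumptions, not the authors' exact algorithm.

```python
import heapq
import numpy as np

def _integral(vol):
    # Summed-volume table with a zero border so box sums need no edge cases.
    s = np.zeros(tuple(d + 1 for d in vol.shape))
    s[1:, 1:, 1:] = vol.cumsum(0).cumsum(1).cumsum(2)
    return s

def _box_sum(s, lo, hi):
    # Sum of voxels in the box with inclusive corners lo=(t1,y1,x1), hi=(t2,y2,x2).
    t1, y1, x1 = lo
    t2, y2, x2 = hi[0] + 1, hi[1] + 1, hi[2] + 1
    return (s[t2, y2, x2] - s[t1, y2, x2] - s[t2, y1, x2] - s[t2, y2, x1]
            + s[t1, y1, x2] + s[t1, y2, x1] + s[t2, y1, x1] - s[t1, y1, x1])

def find_subvolume_above_threshold(score_vol, thresh, max_iters=200000):
    """Heuristic branch-and-bound over axis-aligned subvolumes of a (T, H, W)
    per-voxel score map.  Returns ((t1, y1, x1), (t2, y2, x2)) for the first
    subvolume found whose summed score reaches `thresh`, or None."""
    T, H, W = score_vol.shape
    pos = _integral(np.maximum(score_vol, 0.0))   # positive part of the scores
    neg = _integral(np.minimum(score_vol, 0.0))   # negative part of the scores

    def upper_bound(region):
        # region = six intervals (t1, t2, y1, y2, x1, x2), each a (lo, hi) pair.
        t1, t2, y1, y2, x1, x2 = region
        big_lo, big_hi = (t1[0], y1[0], x1[0]), (t2[1], y2[1], x2[1])      # largest box
        small_lo, small_hi = (t1[1], y1[1], x1[1]), (t2[0], y2[0], x2[0])  # smallest box
        b = _box_sum(pos, big_lo, big_hi)
        if all(a <= c for a, c in zip(small_lo, small_hi)):
            b += _box_sum(neg, small_lo, small_hi)
        return b

    root = ((0, T - 1), (0, T - 1), (0, H - 1),
            (0, H - 1), (0, W - 1), (0, W - 1))
    heap = [(-upper_bound(root), root)]

    for _ in range(max_iters):
        if not heap:
            return None
        neg_b, region = heapq.heappop(heap)
        if -neg_b < thresh:
            # Best remaining upper bound is already below the threshold,
            # so every surviving branch can be terminated right away.
            return None
        widths = [hi - lo for lo, hi in region]
        i = int(np.argmax(widths))
        if widths[i] == 0:
            # The region is a single subvolume; its bound equals its exact
            # score (>= thresh), so report it without proving optimality.
            t1, t2, y1, y2, x1, x2 = (iv[0] for iv in region)
            return (t1, y1, x1), (t2, y2, x2)
        lo, hi = region[i]
        mid = (lo + hi) // 2
        for child_iv in ((lo, mid), (mid + 1, hi)):
            child = region[:i] + (child_iv,) + region[i + 1:]
            # Drop infeasible children (start interval entirely after end interval).
            if (child[0][0] > child[1][1] or child[2][0] > child[3][1]
                    or child[4][0] > child[5][1]):
                continue
            b = upper_bound(child)
            if b >= thresh:   # prune branches that can never reach the threshold
                heapq.heappush(heap, (-b, child))
    return None
```

As a toy usage, `find_subvolume_above_threshold(np.random.randn(40, 30, 30) - 0.1, thresh=15.0)` would return the corners of one above-threshold subvolume or None; in a real detector one would suppress the reported voxels and repeat the search to collect additional event instances.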
