Abstract

In this paper, we propose a new method for modeling temporal context to boost video annotation accuracy. The idea is motivated by the observation that temporally adjacent shots in a video generally contain related content, so annotation performance can be improved by mining the temporal dependency between shots. Based on this observation, we propose a temporal context model that exploits the redundant information shared across shots. By connecting our model with the conditional random field framework and adopting its learning and inference procedures, we obtain a refined probability that a concept occurs in a shot, which combines the temporal context information with the initial output of the video annotation system. Compared with existing methods for mining temporal context in video annotation, our model captures different kinds of shot dependency more accurately and thus improves annotation performance. Furthermore, our model is relatively simple and efficient, which is important for applications that must process large-scale data. Extensive experimental results on the widely used TRECVID datasets demonstrate the effectiveness of our method in improving video annotation accuracy.
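To make the idea of CRF-based temporal refinement concrete, the following is a minimal illustrative sketch, not the paper's exact model: it treats the shots of one video as a binary-labeled linear chain, uses the initial detector probabilities as unary potentials, and rewards adjacent shots that share the same label with an assumed pairwise weight (`transition_strength` is a hypothetical hyper-parameter). Exact forward-backward inference then yields refined per-shot marginals.

```python
import numpy as np
from scipy.special import logsumexp


def refine_shot_probabilities(init_probs, transition_strength=1.5):
    """Smooth per-shot concept probabilities with a linear-chain CRF.

    init_probs         : (T,) initial detector probabilities that the concept
                         occurs in each of T temporally consecutive shots.
    transition_strength: assumed pairwise weight rewarding adjacent shots that
                         share the same label (would be tuned on held-out data).

    Returns the posterior marginal P(concept present in shot t) obtained by
    exact forward-backward inference on the chain.
    """
    init_probs = np.asarray(init_probs, dtype=float)
    T = init_probs.shape[0]
    eps = 1e-12

    # Unary log-potentials for labels {0: absent, 1: present}, shape (T, 2).
    unary = np.stack([np.log(1.0 - init_probs + eps),
                      np.log(init_probs + eps)], axis=1)

    # Pairwise log-potential: bonus when neighbouring shots agree, shape (2, 2).
    pairwise = transition_strength * np.eye(2)

    # Forward recursion: alpha[t, y] accumulates label sequences ending in y.
    alpha = np.zeros((T, 2))
    alpha[0] = unary[0]
    for t in range(1, T):
        alpha[t] = unary[t] + logsumexp(alpha[t - 1][:, None] + pairwise, axis=0)

    # Backward recursion: beta[t, y] accumulates continuations after shot t.
    beta = np.zeros((T, 2))
    for t in range(T - 2, -1, -1):
        beta[t] = logsumexp(pairwise + unary[t + 1] + beta[t + 1], axis=1)

    # Posterior marginals, normalised per shot.
    log_marginal = alpha + beta
    log_marginal -= logsumexp(log_marginal, axis=1, keepdims=True)
    return np.exp(log_marginal)[:, 1]


if __name__ == "__main__":
    # A lone low score surrounded by confident detections is pulled upward
    # by the temporal context, illustrating the refinement effect.
    scores = [0.9, 0.85, 0.3, 0.88, 0.92]
    print(refine_shot_probabilities(scores).round(2))
```

The chain structure here is only one instance of shot dependency; the paper's model is described as capturing several kinds of dependency between shots, which this two-label, nearest-neighbour sketch does not attempt to reproduce.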
