Abstract

We propose a new method to refine video annotation results by exploiting the semantic and visual context of video. On the one hand, semantic context mining is performed in a supervised way, using the manual concept labels of the training set. Because semantic context is learned from human-assigned labels and therefore reflects human intention, it is highly effective for boosting video annotation performance. In this paper, we model the spatial and temporal context in video using conditional random fields with different structures. Compared with existing methods, our approach captures concept relationships in video more accurately and improves annotation performance more effectively. On the other hand, visual context mining is performed in a semi-supervised way based on the visual similarities among video shots. It reflects the natural visual properties of video and can be regarded as a complement to semantic context, which generally cannot be modeled perfectly. We construct a graph based on the visual similarities among shots, and then adopt a graph-based semi-supervised learning approach to propagate the probabilities of reliable shots to other shots with similar visual features. Extensive experiments on the widely used TRECVID datasets demonstrate the effectiveness of our method in improving video annotation accuracy.
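A minimal sketch of the kind of graph-based propagation described above, assuming an RBF-kernel affinity over per-shot visual features and a standard iterative label-propagation update; the function name `propagate_scores` and all parameters are illustrative assumptions, not the paper's actual formulation:

```python
import numpy as np

def propagate_scores(features, initial_scores, reliable_mask,
                     sigma=1.0, alpha=0.9, iters=50):
    """Illustrative graph-based score propagation over video shots.

    features       : (n_shots, d) visual feature vectors, one per shot.
    initial_scores : (n_shots,) detector probabilities for one concept.
    reliable_mask  : boolean (n_shots,) marking shots treated as reliable.
    """
    # Build an affinity graph from pairwise visual similarity (RBF kernel).
    diff = features[:, None, :] - features[None, :, :]
    W = np.exp(-np.sum(diff ** 2, axis=-1) / (2 * sigma ** 2))
    np.fill_diagonal(W, 0.0)

    # Symmetric normalization: S = D^{-1/2} W D^{-1/2}.
    d = W.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(np.maximum(d, 1e-12)))
    S = D_inv_sqrt @ W @ D_inv_sqrt

    # Clamp reliable shots to their initial probabilities and iteratively
    # spread their scores to visually similar shots via the graph.
    y = np.where(reliable_mask, initial_scores, 0.0)
    f = initial_scores.copy()
    for _ in range(iters):
        f = alpha * (S @ f) + (1 - alpha) * y
    return f
```

In such a scheme the trade-off parameter (here `alpha`) controls how strongly scores diffuse along visual-similarity edges versus staying anchored to the reliable shots' original probabilities.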
