Abstract
Concept correlation, which characterizes the relationships between semantic concepts, has recently come to play an important role in video annotation (concept detection). To improve annotation performance, this paper presents a two-view, concept-correlation-based refinement of video annotation that exploits data-specific spatial and temporal concept correlations. In the spatial view, instead of a generic concept correlation shared by all shots, a data-specific concept correlation is estimated for each shot by introducing concept correlation bases that map low-level features to a high-level concept distribution within a sparse representation framework. In the temporal view, going beyond the temporal consistency of a single concept, a richer correlation between different concepts located in the current shot and its neighboring shots is used to adjust the detection scores. Finally, these two types of concept correlation are integrated into a probability-based framework to refine the initial results produced by multiple concept detectors. Experiments on the TRECVID 2006-2008 datasets, together with comparisons against existing work, demonstrate the effectiveness of the approach.
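To make the spatial view concrete, the sketch below illustrates one standard way such a mapping can be realized: l1-regularized sparse coding (solved here with ISTA), where a shot's low-level feature is decomposed over a dictionary of bases and each basis carries an associated concept distribution. This is a minimal illustration under assumed conventions, not the paper's actual learned model; the dictionary D, the per-basis concept distributions C, and all dimensions are random stand-ins.

```python
import numpy as np

def soft_threshold(v, t):
    """Elementwise soft-thresholding, the proximal operator of the l1 norm."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def sparse_code(x, D, lam=0.1, n_iter=200):
    """Solve min_a 0.5*||x - D a||^2 + lam*||a||_1 with ISTA.

    x : (d,) low-level feature vector of one shot
    D : (d, k) dictionary whose columns are concept correlation bases
    Returns the sparse coefficient vector a of shape (k,).
    """
    L = np.linalg.norm(D, 2) ** 2          # Lipschitz constant of the gradient
    step = 1.0 / L
    a = np.zeros(D.shape[1])
    for _ in range(n_iter):
        grad = D.T @ (D @ a - x)           # gradient of the smooth quadratic term
        a = soft_threshold(a - step * grad, step * lam)
    return a

# Toy usage with random stand-ins (NOT the paper's learned bases).
rng = np.random.default_rng(0)
d, k, m = 64, 20, 10                       # feature dim, #bases, #concepts (assumed)
D = rng.standard_normal((d, k))
D /= np.linalg.norm(D, axis=0)             # unit-norm dictionary columns
C = rng.dirichlet(np.ones(m), size=k).T    # (m, k): concept distribution per basis

x = rng.standard_normal(d)                 # low-level feature of the current shot
a = sparse_code(x, D, lam=0.05)
p = C @ np.abs(a)                          # combine per-basis concept distributions
p /= p.sum() + 1e-12                       # normalized data-specific concept distribution
print("nonzero sparse coefficients:", np.count_nonzero(a))
print("estimated concept distribution:", np.round(p, 3))
```

In this reading, the sparsity of a selects only the few bases relevant to the current shot, which is what makes the resulting concept correlation data-specific rather than generic.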