Abstract

Accurate and efficient video classification demands the fusion of multimodal information and the use of intermediate representations. Combining these two ideas into one framework, we propose a series of probabilistic models for video representation and classification using intermediate semantic representations derived from multimodal features of video. Building on a class of bipartite undirected graphical models known as harmoniums, we propose the dual-wing harmonium (DWH) model, which represents video shots as latent semantic topics derived by jointly modeling the transcript keywords and color-histogram features of the data. Our family-of-harmonium (FoH) and hierarchical harmonium (HH) models extend DWH by introducing variables that represent the category labels of the data, allowing data representation and classification to be performed within the same model. Our models are among the few attempts to use undirected graphical models for representing and classifying video data. Experiments on a benchmark video collection show different semantic interpretations of video data under our models, as well as superior classification performance in comparison with several directed models.
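To make the representation step concrete, the sketch below illustrates the general idea of fusing two observed "wings" (keyword counts and a color histogram) into a shared latent topic vector. It is a minimal sketch only, assuming Gaussian latent topics with unit variance so that the posterior-mean topic vector is a linear combination of the two wings; the names `W_text`, `W_color`, and the dimensions are illustrative and are not the paper's exact exponential-family parameterization.

```python
import numpy as np

# Minimal sketch of a dual-wing-harmonium-style representation step.
# Assumption (not taken from the paper): Gaussian latent topics with unit
# variance, so the posterior mean of the topics given both observed wings
# reduces to a linear fusion of the two modalities. Weight names and sizes
# are illustrative only.

rng = np.random.default_rng(0)

n_words, n_bins, n_topics = 1000, 64, 20   # vocabulary size, histogram bins, latent topics

# Coupling weights between each observed "wing" and the shared topic layer.
W_text = rng.normal(scale=0.01, size=(n_words, n_topics))
W_color = rng.normal(scale=0.01, size=(n_bins, n_topics))

def shot_representation(word_counts, color_hist):
    """Posterior-mean topic vector for one video shot, fusing both wings."""
    return word_counts @ W_text + color_hist @ W_color

# Example: a shot with sparse transcript keywords and a normalized color histogram.
x = rng.poisson(0.05, size=n_words).astype(float)   # keyword counts
z = rng.dirichlet(np.ones(n_bins))                  # color-histogram features

topics = shot_representation(x, z)
print(topics.shape)   # (20,) -- the intermediate semantic representation of the shot
```

In the full models, this latent representation is what downstream classification operates on; FoH and HH additionally tie category-label variables to the same latent layer so that representation and classification are learned jointly.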

