Abstract

Today, data is no longer confined to text; it has expanded to multimedia as well. Compared to audio and images, video data demands particular attention because of the ever-increasing number of videos and their massive storage requirements, driven by the introduction of numerous video applications. Video data mining faces key challenges such as structuring video data for proper retrieval and extracting low-level features, which include color, texture, and shape. In our method, videos are structured on the basis of two low-level contents: an RGB histogram and an edge-based method. These two contents detect shot boundaries, i.e., where a shot starts or ends. Shots are then counted and the video type is classified. Video classification helps control copyright infringement and false tagging. We propose a methodology that combines action and dialogue scene detection methods and performs classification. Our proposed methodology achieves an accuracy of 86%, compared to state-of-the-art methods for action and dialogue scene detection, which achieve 78% and 95% respectively.
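To make the histogram-based shot boundary idea concrete, the sketch below compares RGB histograms of consecutive frames and flags a boundary when their similarity drops sharply. It is a minimal illustration of the general technique, not the paper's exact procedure: OpenCV, the 32-bin histograms, the correlation measure, the 0.6 threshold, and the file name "sample_video.mp4" are all assumptions for illustration. The paper's edge-based content would follow the same frame-pair comparison pattern, using edge maps (e.g., from a Canny detector) instead of color histograms.

```python
# Minimal sketch of RGB-histogram-based shot boundary detection.
# Assumptions (not from the paper): OpenCV for video decoding, 32 bins per
# channel, histogram correlation as the similarity measure, and an
# illustrative threshold of 0.6.
import cv2
import numpy as np

def rgb_histogram(frame, bins=32):
    """Concatenated, normalized per-channel histogram of a BGR frame."""
    hists = [cv2.calcHist([frame], [c], None, [bins], [0, 256]) for c in range(3)]
    hist = np.concatenate(hists).astype("float32")
    return cv2.normalize(hist, hist).flatten()

def detect_shot_boundaries(video_path, threshold=0.6):
    """Return frame indices where the histogram similarity between
    consecutive frames falls below the threshold (candidate shot cuts)."""
    cap = cv2.VideoCapture(video_path)
    boundaries, prev_hist, idx = [], None, 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        hist = rgb_histogram(frame)
        if prev_hist is not None:
            similarity = cv2.compareHist(prev_hist, hist, cv2.HISTCMP_CORREL)
            if similarity < threshold:
                boundaries.append(idx)
        prev_hist, idx = hist, idx + 1
    cap.release()
    return boundaries

if __name__ == "__main__":
    # "sample_video.mp4" is a hypothetical input path.
    print(detect_shot_boundaries("sample_video.mp4"))
```

The number of detected boundaries gives the shot count per unit time, which is the kind of cue the abstract describes for separating fast-cut action scenes from dialogue scenes.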
