Abstract
Segmenting movies into scenes helps users browse movie archives and select the ones that interest them. A given movie contains two kinds of scenes: action scenes and non-action scenes. Action scenes can be detected from tempo features such as motion and audio energy; detecting non-action scenes, however, requires content information. In this paper, we present a new approach to detecting non-action movie scenes. The main idea is to use a new dynamic variant of the self-organizing map, called MIGSOM (Multilevel Interior Growing Self-Organizing Map), to detect agglomerations of shots in movie scenes. The originality of the MIGSOM model lies in its architecture for evolving the structure of the network: the map is generated by a growth process that adds nodes where they are needed, whether at the boundaries or in the interior of the map. A further advantage of the proposed MIGSOM algorithm is its ability to find the best structure of the output space during training and thus to better represent the semantics of the data. Our system is tested on a varied database and compared with the classical SOM and with other works. The results show the merit of our approach in terms of recall and precision and confirm that our assumptions are well founded.
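To make the growth idea concrete, the sketch below shows a minimal error-driven growing SOM in Python: a node accumulates quantization error and, once a threshold is exceeded, a new node is inserted next to it. This is an illustrative simplification under assumed defaults (class name GrowingSOM, parameters growth_threshold and lr are hypothetical), not the paper's exact multilevel insertion scheme; in particular, MIGSOM's interior growth via additional map levels is only noted in a comment.

```python
import numpy as np

class GrowingSOM:
    """Minimal growing-SOM sketch; not the paper's MIGSOM implementation."""

    def __init__(self, dim, growth_threshold=1.0, lr=0.3, seed=0):
        rng = np.random.default_rng(seed)
        # Start from a small 2x2 grid: integer grid positions plus weight vectors.
        self.positions = [(0, 0), (0, 1), (1, 0), (1, 1)]
        self.weights = [rng.random(dim) for _ in self.positions]
        self.errors = [0.0] * len(self.positions)
        self.growth_threshold = growth_threshold
        self.lr = lr

    def _bmu(self, x):
        # Best-matching unit: node whose weight vector is closest to the sample.
        dists = [np.linalg.norm(x - w) for w in self.weights]
        return int(np.argmin(dists)), float(min(dists))

    def train_step(self, x):
        bmu, dist = self._bmu(x)
        # Pull the winner and its immediate grid neighbours toward the sample.
        br, bc = self.positions[bmu]
        for i, (r, c) in enumerate(self.positions):
            grid_dist = abs(r - br) + abs(c - bc)
            if grid_dist <= 1:
                self.weights[i] += self.lr * np.exp(-grid_dist) * (x - self.weights[i])
        # Accumulate quantization error on the winner; grow once it exceeds the threshold.
        self.errors[bmu] += dist
        if self.errors[bmu] > self.growth_threshold:
            self._grow(bmu)
            self.errors[bmu] = 0.0

    def _grow(self, bmu):
        # Insert a new node in a free neighbouring grid cell of the overloaded unit.
        # This only covers boundary-style growth; MIGSOM additionally allows growth
        # from interior units via extra map levels, which is omitted here for brevity.
        r, c = self.positions[bmu]
        for dr, dc in ((0, 1), (1, 0), (0, -1), (-1, 0)):
            cand = (r + dr, c + dc)
            if cand not in self.positions:
                self.positions.append(cand)
                self.weights.append(self.weights[bmu].copy())
                self.errors.append(0.0)
                return

if __name__ == "__main__":
    # Illustrative usage with random 8-dimensional shot descriptors.
    data = np.random.default_rng(1).random((200, 8))
    som = GrowingSOM(dim=8)
    for x in data:
        som.train_step(x)
    print(f"map grew to {len(som.positions)} nodes")
```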