Abstract

We present a method for motion-based video segmentation and segment classification as a step towards video summarisation. The video is segmented sequentially by detecting changes in the dominant image motion, which is assumed to be related to camera motion. This is achieved by analysing the temporal variations of the coefficients of a robustly estimated global 2D affine motion model. The resulting video segments provide suitable temporal units on which to apply a classification algorithm. To this end, we adopt a statistical representation of the residual motion content of the video scene, relying on the distribution of temporal cooccurrences of local motion-related measurements. Pre-identified classes of dynamic events are learned off-line from a training set of video samples of the genre of interest. Each video segment is then classified according to a maximum likelihood (ML) principle. Finally, excerpts of the relevant classes can be selected for video summarisation.
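
For concreteness, a brief sketch of the two main ingredients, written with the standard 2D affine parametrisation and generic notation that is not taken from the abstract itself. The dominant image motion at pixel $p = (x, y)$ is described by the parametric flow
\[
  w_\theta(p) \;=\; \begin{pmatrix} a_1 + a_2\,x + a_3\,y \\ a_4 + a_5\,x + a_6\,y \end{pmatrix},
  \qquad \theta = (a_1, \dots, a_6),
\]
whose coefficients, robustly estimated frame by frame, are monitored over time to detect segment boundaries. Each resulting segment $s$, summarised here by a generic vector $o_s$ of residual-motion measurements, would then be assigned to the class
\[
  \hat{c}(s) \;=\; \arg\max_{c}\; P\!\left(o_s \mid \lambda_c\right),
\]
where $\lambda_c$ denotes the statistical model learned off-line for class $c$.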
