Abstract

At the core of vision research is the notion of perceptual invariance. The question of how the visual system develops stable, invariant states in an ever-transforming environment is central to understanding the brain's recognition process. The slowness principle, which underlies slow feature analysis, refers to the brain's ability to generate slowly changing, and thus stable, percepts in response to rapidly varying visual stimulation from the environment. Based on this principle, this paper addresses the categorization of video sequences composed of dynamic natural scenes. Unlike models relying on supervised learning or handcrafted descriptors, we represent videos using unsupervised learning of motion features. Our method is based on: 1) the slow feature analysis principle, from which motion features representing the principal and most stable motion components of training videos are learned; 2) the integration of the local motion features into a global classification architecture. Classification experiments yield 11% and 19% improvements over state-of-the-art methods on two dynamic natural scene data sets. A quantitative and qualitative analysis illustrates how the learned slow features untangle the input manifolds and remain stable under various parameter settings.
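To make the slowness principle concrete, the following is a minimal sketch of linear slow feature analysis in NumPy, not the paper's actual pipeline: the input is whitened, and the directions whose temporal derivatives have the smallest variance are extracted as the slowest (most stable) features. All function and variable names here are illustrative.

```python
import numpy as np

def slow_feature_analysis(X, n_features=2):
    """Linear SFA sketch: given a (T x D) time series X, return a
    (D x n_features) projection whose outputs vary most slowly in time."""
    # Center the data.
    X = X - X.mean(axis=0)
    # Whiten via eigendecomposition of the covariance matrix.
    cov = np.cov(X, rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(cov)
    keep = eigvals > 1e-10  # drop near-zero-variance directions
    W_white = eigvecs[:, keep] / np.sqrt(eigvals[keep])
    Z = X @ W_white
    # Finite differences approximate the temporal derivative.
    dZ = np.diff(Z, axis=0)
    # Slowest directions = eigenvectors of the derivative covariance
    # with the smallest eigenvalues.
    dvals, dvecs = np.linalg.eigh(np.cov(dZ, rowvar=False))
    order = np.argsort(dvals)[:n_features]
    return W_white @ dvecs[:, order]

# Toy usage: a slow sine mixed with a fast one across two channels.
t = np.linspace(0, 4 * np.pi, 500)
slow, fast = np.sin(t), np.sin(20 * t)
X = np.column_stack([slow + 0.5 * fast, slow - 0.5 * fast])
W = slow_feature_analysis(X, n_features=1)
y = (X - X.mean(axis=0)) @ W
# y should track the slow component, illustrating how a slowness
# objective recovers the stable signal hidden in fast-varying input.
```

In the paper's setting, the analogous computation is applied to local spatio-temporal patches of video so that the learned features capture the principal, stable motion components rather than a toy sine.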
