Abstract

Dynamic Textures (DTs) are sequences of images that exhibit stationarity properties in time, such as waves, fire, and water fountains. DT-based scene classification is fundamental to understanding natural video content, and a crucial step toward solving this task is the extraction of robust and discriminative features. In this paper, we propose a novel approach to extracting and learning texture features from natural scenes. First, we model the textures using Local Binary Patterns on Three Orthogonal Planes (LBP-TOP) and Volume Local Binary Patterns (VLBP). We then use these descriptors to train Deep Belief Networks (DBNs) that perform the scene classification. Experimental results on widely used DT classification benchmark datasets, including DynTex++ and UCLA, demonstrate the robustness of our approach.
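To make the feature-extraction stage concrete, the following is a minimal sketch of an LBP-TOP-style descriptor for a video volume. It is not the authors' implementation: for brevity it computes 8-neighbour LBP codes only on the three central orthogonal planes (XY, XT, YT) and concatenates their normalized histograms, whereas full LBP-TOP accumulates histograms over all planes along each axis. All function names here (`lbp_2d`, `lbp_top`) are illustrative assumptions.

```python
import numpy as np

def lbp_2d(img):
    """Basic 8-neighbour LBP code for each interior pixel of a 2-D slice."""
    c = img[1:-1, 1:-1]                      # centre pixels
    codes = np.zeros_like(c, dtype=np.uint8)
    # clockwise neighbour offsets starting at the top-left
    shifts = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
              (1, 1), (1, 0), (1, -1), (0, -1)]
    for bit, (dy, dx) in enumerate(shifts):
        nb = img[1 + dy:img.shape[0] - 1 + dy,
                 1 + dx:img.shape[1] - 1 + dx]
        codes |= (nb >= c).astype(np.uint8) << bit  # set bit if neighbour >= centre
    return codes

def lbp_top(volume):
    """Concatenated LBP histograms from the central XY, XT and YT planes.

    `volume` has shape (T, H, W); the result is a 3 * 256 = 768-dim descriptor.
    (Simplification: full LBP-TOP pools histograms over *all* planes per axis.)
    """
    t, h, w = volume.shape
    planes = [volume[t // 2],        # XY plane (appearance)
              volume[:, h // 2, :],  # XT plane (horizontal motion)
              volume[:, :, w // 2]]  # YT plane (vertical motion)
    hists = [np.bincount(lbp_2d(p).ravel(), minlength=256) for p in planes]
    hists = [hh / hh.sum() for hh in hists]  # normalize each plane's histogram
    return np.concatenate(hists)
```

The resulting fixed-length histogram vector is what would then be fed to a classifier such as a DBN.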

