Abstract

Efficiently representing the spatio-temporal features of dynamic textures (DTs) in videos remains difficult because of the well-known issues of environmental change, illumination variation, and noise. To mitigate these, this paper proposes a new approach for efficient DT representation based on the following novel concepts. First, a novel filtering kernel, called Difference of Derivative Gaussians (DoDG), is introduced for the first time, built on high-order derivatives of a Gaussian kernel. It yields DoDG-filtered outcomes that are markedly more resistant to noise for DT representation than those obtained with the conventional Difference of Gaussians (DoG). A framework of low computational complexity is then presented that exploits DoDG for video denoising as an effective preprocessing step for DT encoding. Finally, a simple variant of Local Binary Patterns (LBPs) is used to extract local features from the DoDG-filtered outcomes, constructing discriminative DoDG-based descriptors of small dimension, well suited to mobile applications. Experimental results for DT recognition verify that our proposal performs significantly better than all non-deep-learning methods while remaining very close to deep-learning approaches. Our descriptors are also clearly better than those based on the traditional DoG.
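To illustrate the contrast the abstract draws, the sketch below builds a conventional 1-D DoG kernel and, as an assumption about the DoDG idea (the exact definition is given in the paper, not here), a difference of same-order Gaussian derivatives at two scales. The derivative order (2) and the scale pair (1.0, 1.6) are illustrative choices, not values from the paper.

```python
import numpy as np

def gaussian(x, sigma):
    # Normalized 1-D Gaussian kernel.
    return np.exp(-x**2 / (2 * sigma**2)) / (sigma * np.sqrt(2 * np.pi))

def gaussian_deriv2(x, sigma):
    # Analytic second derivative of the Gaussian (order 2 chosen as an example
    # of a "high-order derivative"; the paper's actual order may differ).
    return gaussian(x, sigma) * (x**2 - sigma**2) / sigma**4

x = np.linspace(-8.0, 8.0, 161)

# Conventional Difference of Gaussians (DoG) at two scales.
dog = gaussian(x, 1.0) - gaussian(x, 1.6)

# Assumed DoDG analogue: difference of same-order Gaussian derivatives
# at the same two scales.
dodg = gaussian_deriv2(x, 1.0) - gaussian_deriv2(x, 1.6)
```

Both kernels are zero-mean band-pass filters; in the paper's framework the DoDG-filtered responses are the inputs to the LBP-variant encoding step.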
