Abstract

Fast and invariant feature extraction is crucial in computer vision applications where computation time is constrained in both the training and testing phases of the classifier. In this paper, we propose a nature-inspired dimensionality reduction technique for fast and invariant visual feature extraction. The human visual system can trade spatial resolution for spectral resolution to reconstruct missing colors in visual perception. This phenomenon is widely exploited in the printing industry to reduce the number of colors required for printing, through a technique called color dithering. In this work, we adopt a fast error-diffusion color dithering algorithm to reduce the spectral resolution and extract salient features by employing a novel Hessian matrix analysis technique; the extracted features are then described by a spatial-chromatic histogram. The computation time, descriptor dimensionality, and classification performance of the proposed feature are assessed under drastic variations in object orientation, viewing angle, and illumination, and compared against several state-of-the-art handcrafted and deep-learned features. Extensive experiments on two publicly available object datasets, COIL-100 and ALOI, carried out on both a desktop PC and a Raspberry Pi device, show multiple advantages of the proposed approach, such as lower computation time, high robustness, and comparable classification accuracy in a weakly supervised environment. Further, the proposed approach demonstrated the capability of operating solely on a conventional SoC device while utilizing only a small fraction of the available hardware resources.
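To make the error-diffusion dithering step concrete, the sketch below shows a classic Floyd-Steinberg style error-diffusion quantizer in Python. It is only an illustration of the general technique the abstract refers to; the paper's actual dithering algorithm, palette size, and palette colors are not specified here, so the `palette` argument and the 8-color example are assumptions.

```python
# Minimal sketch of error-diffusion color dithering (Floyd-Steinberg weights),
# for illustration only; the paper's exact algorithm and palette are not given here.
import numpy as np

def error_diffusion_dither(image, palette):
    """Quantize an RGB image to a small palette, diffusing the
    quantization error of each pixel to its unprocessed neighbors."""
    img = image.astype(np.float64)           # H x W x 3, values in [0, 255]
    palette = palette.astype(np.float64)     # K x 3 palette colors (assumed)
    h, w, _ = img.shape
    out = np.zeros((h, w), dtype=np.int32)   # palette index per pixel

    for y in range(h):
        for x in range(w):
            old = img[y, x]
            # pick the nearest palette color
            idx = int(np.argmin(np.sum((palette - old) ** 2, axis=1)))
            out[y, x] = idx
            err = old - palette[idx]
            # diffuse the quantization error (7/16, 3/16, 5/16, 1/16)
            if x + 1 < w:
                img[y, x + 1] += err * 7 / 16
            if y + 1 < h:
                if x > 0:
                    img[y + 1, x - 1] += err * 3 / 16
                img[y + 1, x] += err * 5 / 16
                if x + 1 < w:
                    img[y + 1, x + 1] += err * 1 / 16
    return out

if __name__ == "__main__":
    # Hypothetical usage: reduce a random image to an 8-color palette.
    rng = np.random.default_rng(0)
    demo = rng.integers(0, 256, size=(64, 64, 3))
    palette = np.array([[0, 0, 0], [255, 255, 255],
                        [255, 0, 0], [0, 255, 0], [0, 0, 255],
                        [255, 255, 0], [0, 255, 255], [255, 0, 255]])
    indices = error_diffusion_dither(demo, palette)
```

The resulting per-pixel palette indices form the reduced spectral representation from which salient points and a spatial-chromatic histogram descriptor can subsequently be computed.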
