Abstract

Texture recognition is one of the most important branches of image analysis. This paper develops a new solution to texture recognition based on a Cellular Neural Network (CellNN). First, it proposes an improved CellNN model obtained by imposing binary constraints on the local receptive fields, and then designs a recurrent convolution framework for this model that generates two types of texture feature maps: state feature maps and output feature maps. To obtain low-dimensional features, the state feature maps are further compressed by mapping them to rotation-invariant patterns and merging low-frequency-occurrence patterns. The state and output feature maps are then fused through the statistics of their joint-distribution patterns to produce single-resolution features. Moreover, a multi-resolution feature combination scheme is designed through softmax-and-variance optimization followed by the concatenation of multiple features. Finally, a fully connected neural network is trained to serve as the texture recognizer. Experimental comparisons of 15 algorithms on five benchmark datasets show that, on datasets with no more than 30 texture classes, such as Brodatz, our method consistently achieves the highest recognition accuracy among all compared methods. On large datasets with many texture classes, such as ALOT, our method also surpasses every non-deep-learning competitor, including the state-of-the-art gLBP, and falls only slightly behind the two best deep-learning methods, FV-Alex and FV-VGGVD. In terms of time cost, our method outperforms every deep-learning method in the feature extraction stage and every compared method except the original LBP in feature matching.
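To make the overall pipeline concrete, the sketch below illustrates the two core ideas named in the abstract: a recurrent convolution pass that yields a state feature map and an output feature map, followed by a joint-distribution histogram that fuses the two maps into a single-resolution feature. It assumes a standard discrete-time CellNN formulation; the 3x3 templates, iteration count, quantization scheme, and bin count are illustrative placeholders, not the paper's actual binary-constrained templates, rotation-invariant mapping, or multi-resolution combination.

```python
import numpy as np
from scipy.signal import convolve2d

def cellnn_feature_maps(image, A, B, bias=0.0, steps=3):
    """Run a discrete-time CellNN-style recurrent convolution and return
    the state feature map and the output feature map after `steps` steps.
    A is the feedback template and B the control (input) template; the
    output nonlinearity is the standard piecewise-linear CellNN function
    y = 0.5 * (|x + 1| - |x - 1|)."""
    x = np.zeros_like(image, dtype=float)   # cell states
    u = image.astype(float)                 # constant external input
    for _ in range(steps):
        y = 0.5 * (np.abs(x + 1) - np.abs(x - 1))   # output map
        x = (convolve2d(y, A, mode="same", boundary="symm")
             + convolve2d(u, B, mode="same", boundary="symm") + bias)
    y = 0.5 * (np.abs(x + 1) - np.abs(x - 1))
    return x, y   # state feature map, output feature map

def joint_histogram_feature(state_map, output_map, bins=16):
    """Fuse the two maps into a single-resolution feature via a joint 2-D
    histogram of quantized (state, output) values; the min-max quantization
    and bin count are illustrative choices, not the paper's."""
    def quantize(m):
        m = (m - m.min()) / (np.ptp(m) + 1e-12)
        return np.clip((m * bins).astype(int), 0, bins - 1)
    s, o = quantize(state_map), quantize(output_map)
    joint = np.zeros((bins, bins))
    np.add.at(joint, (s.ravel(), o.ravel()), 1)
    return joint.ravel() / joint.sum()      # normalized joint histogram

# Illustrative 3x3 templates (placeholder for the binary-constrained fields).
A = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=float) * 0.1
B = np.array([[-1, -1, -1], [-1, 8, -1], [-1, -1, -1]], dtype=float) * 0.1

image = np.random.rand(64, 64)              # stand-in texture patch
state_map, output_map = cellnn_feature_maps(image, A, B)
feature = joint_histogram_feature(state_map, output_map)
print(feature.shape)                        # (256,) single-resolution feature
```

In the full method, such single-resolution features would be computed at several resolutions, combined, and fed to the fully connected recognizer; those stages are omitted here because the abstract does not specify their details.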
