The invariance of image moments to rotation, scaling and translation is of high significance in image recognition. For this reason, the seven invariant moments presented by Hu are widely used in image analysis. However, these moments are of finite order and therefore do not form a complete set of image descriptors. In this paper, we introduce another series of invariant moments of infinite order, based on normalized central moments. Because these moments are non-orthogonal, they carry redundant information. To overcome this problem, we propose a new construction technique for non-separable orthogonal polynomials in two variables based on a recurrence formula, and we present a new set of orthogonal moments that are invariant to translation, scaling and rotation. The proposed approaches are evaluated on several well-known computer vision datasets through experiments on moment invariability, image retrieval and object classification, the latter based on the fuzzy K-means clustering algorithm. The classification and retrieval performance of these invariant moments is compared with that of recent invariant moments, including invariants of multi-channel orthogonal radial-substituted Chebyshev moments, invariants of quaternion radial-substituted Chebyshev moments, invariants of rotational moments in Radon space, and Legendre–Fourier moments in Radon space. Experimental results on four image databases, namely the Columbia Object Image Library (COIL-20), the MPEG7-CE shape database, COIL-100 and the ORL database, show that our orthogonal invariant moments outperform the other descriptors tested.
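For reference, the sketch below illustrates the classical building blocks the abstract alludes to, not the paper's proposed moments: central moments taken about the image centroid (translation invariance), their scale-normalized form, and the first two of Hu's seven invariants (which are additionally rotation invariant). The array layout, grid construction and maximum order are illustrative assumptions.

```python
import numpy as np

def normalized_central_moments(img, max_order=3):
    """Return eta[p, q] = mu_pq / mu_00**((p+q)/2 + 1) for p + q <= max_order."""
    img = np.asarray(img, dtype=float)
    ys, xs = np.mgrid[:img.shape[0], :img.shape[1]]
    m00 = img.sum()
    xbar = (xs * img).sum() / m00          # centroid -> translation invariance
    ybar = (ys * img).sum() / m00
    eta = np.zeros((max_order + 1, max_order + 1))
    for p in range(max_order + 1):
        for q in range(max_order + 1 - p):
            mu_pq = (((xs - xbar) ** p) * ((ys - ybar) ** q) * img).sum()
            eta[p, q] = mu_pq / m00 ** ((p + q) / 2 + 1)   # scale normalization
    return eta

def hu_first_two(eta):
    """First two Hu invariants, built from second-order normalized moments."""
    phi1 = eta[2, 0] + eta[0, 2]
    phi2 = (eta[2, 0] - eta[0, 2]) ** 2 + 4 * eta[1, 1] ** 2
    return phi1, phi2

if __name__ == "__main__":
    # Toy binary "object": the invariants change little under shift/scale/rotation.
    img = np.zeros((64, 64))
    img[20:40, 10:50] = 1.0
    phi1, phi2 = hu_first_two(normalized_central_moments(img))
    print(phi1, phi2)
```

Because descriptors like these are built from a finite number of low-order moments, they cannot characterize an image completely, which is the limitation the paper's infinite-order and orthogonal invariant moments are designed to address.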