Abstract

The quality of the features extracted from the instances and the number of training instances are two key factors in machine learning, and both strongly affect how effective the resulting model is. Acquiring a large number of training instances can be very expensive in some domains, such as medicine. Designing a good feature set, on the other hand, is very difficult and often requires domain expertise. In computer vision, image descriptors have emerged to automate feature detection and extraction; however, developing these descriptors typically still requires domain-expert intervention. The aim of this paper is to use genetic programming to automatically construct a rotation-invariant image descriptor by synthesizing a set of formulas from simple arithmetic operators and first-order statistics, while simultaneously determining the length of the feature vector, using only two instances per class. The performance of the proposed method is evaluated on seven texture classification image datasets and compared against eight hand-crafted image descriptors designed by domain experts. Quantitatively, the proposed method significantly outperforms, or achieves comparable performance to, the competitor methods. Qualitatively, the analysis shows that the descriptors evolved by the proposed method are interpretable.
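To make the abstract's description more concrete, the sketch below illustrates one plausible reading of such a descriptor: hand-written stand-in formulas (in the paper these would be evolved by genetic programming) combine first-order statistics of a sliding window with simple arithmetic, each formula contributes one bit of a binary code, and the codes are accumulated into a histogram whose length is fixed by the number of formulas. All function names and the specific formulas here are illustrative assumptions, not the paper's actual descriptor.

```python
import numpy as np

# Hypothetical "evolved" formulas: each maps the first-order statistics of a
# local window to a single number. In the paper such formulas are found by
# genetic programming; these are hand-written stand-ins for illustration only.
EVOLVED_FORMULAS = [
    lambda s: s["mean"] - s["median"],
    lambda s: s["max"] - 2.0 * s["std"] - s["mean"],
    lambda s: s["min"] + s["std"] - s["median"],
]


def first_order_stats(window):
    """First-order statistics ignore pixel ordering within the window,
    which is what gives the per-window code its rotation invariance."""
    return {
        "mean": float(np.mean(window)),
        "std": float(np.std(window)),
        "min": float(np.min(window)),
        "max": float(np.max(window)),
        "median": float(np.median(window)),
    }


def describe(image, window=3):
    """Slide a window over the image, threshold each formula's output at zero
    to obtain one bit, and accumulate the resulting binary codes into a
    histogram. With k formulas the feature vector has length 2**k, so evolving
    the formulas also determines the descriptor's dimensionality."""
    k = len(EVOLVED_FORMULAS)
    hist = np.zeros(2 ** k)
    h, w = image.shape
    for r in range(h - window + 1):
        for c in range(w - window + 1):
            stats = first_order_stats(image[r:r + window, c:c + window])
            bits = [f(stats) > 0 for f in EVOLVED_FORMULAS]
            code = sum(int(b) << i for i, b in enumerate(bits))
            hist[code] += 1
    return hist / hist.sum()  # normalised histogram used as the feature vector


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    print(describe(rng.random((32, 32))))
```

Under this reading, classification with only two instances per class would compare such histograms (e.g., with a nearest-neighbour rule), although the exact classifier is not specified in the abstract.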
