Modern outdoor self-localizing computer vision applications require more from descriptors than repeatability: descriptors must be invariant to lighting conditions and geometric transformations to support efficient classification. This paper investigates a new genetic-algorithm-based framework that creates and optimizes extensible, modular descriptors for specific outdoor environments. The algorithm returns descriptors with improved efficiency and classification performance: it tunes the image processing and machine learning parameters and optimizes descriptor size by activating only the necessary modules. To demonstrate the strength of the descriptor, we compared it with the most commonly used standard descriptors in terms of speed, accuracy, and invariance to lighting conditions, image resolution changes, scale, affine transformation, rotation, and classification. The results show that the descriptor achieves average performance in transformation invariance, while its description ability in sparse areas is significantly better than that of the most widely used descriptors. The descriptor was also integrated into an augmented reality algorithm to create a self-regulating segmentation application.
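The following is a minimal sketch of the general idea outlined above (a genetic algorithm searching over a binary module-activation mask plus processing parameters, with a fitness that rewards classification accuracy and penalizes descriptor size), not the authors' actual implementation; the module names, parameter ranges, and the `evaluate_descriptor` stub are hypothetical placeholders.

```python
import random

# Hypothetical descriptor modules; the genome activates a subset of them.
MODULES = ["gradient_hist", "color_hist", "lbp", "edge_density", "hog"]
POP_SIZE, GENERATIONS, MUTATION_RATE = 20, 30, 0.1

def random_genome():
    # Genome = binary activation mask + example continuous parameters.
    mask = [random.randint(0, 1) for _ in MODULES]
    params = {"blur_sigma": random.uniform(0.5, 3.0),
              "svm_c": random.uniform(0.1, 10.0)}
    return {"mask": mask, "params": params}

def evaluate_descriptor(genome):
    # Placeholder fitness: a real system would build the descriptor from the
    # active modules, train a classifier on outdoor images, and return
    # validation accuracy. Here a random score stands in for that step.
    active = sum(genome["mask"])
    accuracy = random.uniform(0.5, 1.0) if active else 0.0
    size_penalty = 0.02 * active  # smaller descriptors are preferred
    return accuracy - size_penalty

def crossover(a, b):
    cut = random.randint(1, len(MODULES) - 1)
    mask = a["mask"][:cut] + b["mask"][cut:]
    params = {k: random.choice([a["params"][k], b["params"][k]]) for k in a["params"]}
    return {"mask": mask, "params": params}

def mutate(genome):
    for i in range(len(genome["mask"])):
        if random.random() < MUTATION_RATE:
            genome["mask"][i] ^= 1  # toggle a module on/off
    if random.random() < MUTATION_RATE:
        genome["params"]["blur_sigma"] = random.uniform(0.5, 3.0)
    return genome

population = [random_genome() for _ in range(POP_SIZE)]
for _ in range(GENERATIONS):
    scored = sorted(population, key=evaluate_descriptor, reverse=True)
    parents = scored[:POP_SIZE // 2]  # truncation selection
    children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                for _ in range(POP_SIZE - len(parents))]
    population = parents + children

best = max(population, key=evaluate_descriptor)
print("active modules:", [m for m, on in zip(MODULES, best["mask"]) if on])
```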