Abstract

Subject of study. The study investigated the possibility of using neural network models of second-order visual mechanisms as input modules for neural network classifiers. Second-order visual mechanisms detect spatial inhomogeneities in the contrast, orientation, and spatial frequency of an image. These mechanisms are traditionally regarded as one of the stages of early visual processing, and their role in texture perception has been studied extensively. Aim of study. The study aimed to determine whether classifier input modules pretrained to demodulate the spatial modulations of luminance gradients contribute to object and scene classification. Method. Neural network modeling was used as the main method. At the first stage of the study, a set of texture images was generated to train neural network models of second-order visual mechanisms. At the second stage, object and scene datasets were prepared and used to train classifier networks; the pretrained models of second-order visual mechanisms, with their weights fixed, served as the input modules of these networks. Main results. The second-order information, represented as maps of instantaneous values of the contrast, orientation, and spatial-frequency modulation functions of the image, was sufficient to identify only some of the scene classes. In general, using the values of the luminance-gradient modulation functions for object classification proved ineffective within the proposed neural network architecture. Thus, the hypothesis that second-order visual filters encode features enabling object identification was not confirmed. This result makes it necessary to test an alternative hypothesis: that the role of second-order filters is limited to the construction of saliency maps, and that these filters act as windows through which information is received from the outputs of first-order filters. Practical significance. The feasibility of using models of second-order visual mechanisms in computer vision systems was assessed.
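To make the described architecture concrete, the following is a minimal illustrative sketch in PyTorch, not the authors' code: a stand-in module, assumed to have been pretrained on the texture set to output contrast, orientation, and spatial-frequency modulation maps, is frozen and used as the input stage of a scene classifier. The module names (SecondOrderDemodulator, SceneClassifier), layer choices, and sizes are assumptions made purely for illustration.

```python
import torch
import torch.nn as nn

class SecondOrderDemodulator(nn.Module):
    """Hypothetical stand-in for a model pretrained to demodulate luminance
    gradients: maps a grayscale image to three modulation maps
    (contrast, orientation, spatial frequency)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=7, padding=3), nn.ReLU(),
            nn.Conv2d(16, 3, kernel_size=7, padding=3),  # 3 modulation maps
        )

    def forward(self, x):
        return self.net(x)

class SceneClassifier(nn.Module):
    """Classifier that receives only the second-order modulation maps
    produced by a frozen, pretrained input module."""
    def __init__(self, demodulator, num_classes):
        super().__init__()
        self.demodulator = demodulator
        for p in self.demodulator.parameters():
            p.requires_grad = False          # fixed (frozen) weights
        self.head = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, num_classes),
        )

    def forward(self, x):
        with torch.no_grad():                # input module is not trained
            maps = self.demodulator(x)
        return self.head(maps)

# Usage sketch: the demodulator would be loaded from the texture-pretraining
# stage; here it is instantiated with random weights for illustration.
demod = SecondOrderDemodulator()
model = SceneClassifier(demod, num_classes=10)
logits = model(torch.randn(8, 1, 128, 128))  # batch of grayscale images
print(logits.shape)  # torch.Size([8, 10])
```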
