Abstract

Second-order visual filters are mechanisms that preattentively combine the rectified outputs of first-order filters (linear striate neurons). This allows them to select image areas characterized by spatial heterogeneity of local visual features. The aim of our research was to determine whether the information from these areas is sufficient to detect unfamiliar faces and to distinguish their gender. In our experiments we used digital photographs of real living things, artificial objects, and faces. All images were adjusted to the same average luminance, contrast, and size (7 degrees of visual angle) and were processed to extract the areas that differed most in contrast, orientation, and spatial frequency at each of six carrier frequencies (0.5, 1, 2, 4, 8, and 16 cpd). The remaining parts of each image were set to the background. The resulting pictures were presented in a random sequence, and after each presentation the observer had to report what he or she saw. When a face was presented, the observer's answer could be assigned to one of the following categories: 'it is not clear', 'head', 'human face', 'male / female'. We found that the information contained in the image areas with spatial heterogeneity of local features is sufficient not only for detecting a face, but also for distinguishing its gender. The best results were obtained at a carrier frequency of 2 cpd; the results were slightly worse at 0.5 and 1 cpd. The information extracted from the high-frequency half of the spectrum, however, was significantly less useful. These results suggest that the information transmitted by second-order visual filters may be used for pattern recognition.
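
The abstract does not include code, but the extraction step it describes follows the general filter-rectify-filter (FRF) scheme for second-order vision. Below is a minimal Python sketch of that scheme, assuming a Gabor first stage, squaring as the rectifier, and a coarser Gaussian second stage; the kernel parameters, the orientation-variance measure of heterogeneity, the keep-25%-of-pixels threshold, and the viewing geometry in the usage comment are illustrative assumptions, not the authors' procedure.

# A minimal sketch (not the authors' code) of a filter-rectify-filter (FRF)
# pipeline: first-order Gabor filtering, rectification, second-stage pooling,
# and selection of image regions with high heterogeneity of local energy.
# Kernel sizes, bandwidths and the threshold are illustrative assumptions.
import numpy as np
from scipy.signal import fftconvolve
from scipy.ndimage import gaussian_filter

def gabor_kernel(freq_cpp, theta, sigma, size=None):
    """Odd-symmetric Gabor kernel; freq_cpp is carrier frequency in cycles/pixel."""
    if size is None:
        size = int(6 * sigma) | 1          # odd kernel width spanning ~+/-3 sigma
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    envelope = np.exp(-(x**2 + y**2) / (2 * sigma**2))
    return envelope * np.sin(2 * np.pi * freq_cpp * xr)

def second_order_maps(image, freq_cpp, n_orientations=4):
    """Local-energy maps from rectified first-order responses, pooled by a
    coarser second-stage Gaussian (the 'second-order filter')."""
    sigma1 = 0.5 / freq_cpp                # first-stage envelope ~ one carrier period
    sigma2 = 4.0 / freq_cpp                # second stage is much coarser
    energies = []
    for theta in np.linspace(0, np.pi, n_orientations, endpoint=False):
        k = gabor_kernel(freq_cpp, theta, sigma1)
        resp = fftconvolve(image, k, mode="same")
        energies.append(gaussian_filter(resp**2, sigma2))   # rectify + pool
    return np.stack(energies)              # shape (n_orientations, H, W)

def select_heterogeneous_regions(image, freq_cpp, keep_fraction=0.25):
    """Keep only the pixels whose pooled energy varies most across orientation
    channels; everything else is set to the mean luminance ('background')."""
    energy = second_order_maps(image, freq_cpp)
    heterogeneity = energy.std(axis=0)     # spread across orientation channels
    thresh = np.quantile(heterogeneity, 1.0 - keep_fraction)
    mask = heterogeneity >= thresh
    out = np.full_like(image, image.mean())
    out[mask] = image[mask]
    return out

# Example: a 2 cpd carrier for a 7-degree image rendered 512 px across,
# i.e. 512 / 7 ~ 73 pixels per degree, so 2 / 73 cycles per pixel.
# img = load_grayscale_image(...)          # hypothetical loader, returns a 2-D float array
# filtered = select_heterogeneous_regions(img, freq_cpp=2 / (512 / 7))

In this sketch "heterogeneity" is the standard deviation of pooled energy across orientation channels; other definitions (e.g., spread across carrier frequencies or local contrast) fit the same pipeline by changing only the last stage.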

Highlights

  • The issue of visual image formation has a long history

  • According to one theory, an image is a holistic description in which the most typical object of its class is taken as the standard

  • Until now, it has remained unknown how the invariance of holistic descriptions could be achieved and what could serve as separating features


Introduction

The issue of visual image formation has a long history. Until recently, no single theory had been proposed that could explain all aspects of this issue. According to the first point of view, an image is a holistic description in which the most typical object of its class is taken as the standard. The second point of view also proceeds from a holistic image description, but takes an averaged description of the objects of its class as the standard. The third theory assumes that every image can be described as a sum of its features. Until now, it has remained unknown how the invariance of holistic descriptions could be achieved and what could serve as separating features.
