Abstract

Although categorization can take place at different levels of abstraction, classic studies on semantic labeling identified the basic level, for example, dog, as the entry point for categorization. Ultrarapid categorization tasks have contradicted these findings, indicating that participants are faster at detecting superordinate-level information, for example, animal, in a complex visual image. We argue that both seemingly contradictory findings can be reconciled within the framework of parallel distributed processing and its successor Leabra (Local, Error-driven and Associative, Biologically Realistic Algorithm). The current study aimed to verify this prediction in an ultrarapid categorization task with a dynamically changing presentation time (PT) for each briefly presented object, followed by a perceptual mask. Furthermore, we manipulated two defining task variables: level of categorization (basic vs. superordinate categorization) and object presentation mode (object-in-isolation vs. object-in-context). In contrast to previous ultrarapid categorization research, which focused on reaction time, we used accuracy as our main dependent variable. Results indicated a consistent superordinate processing advantage, coinciding with an overall improvement in performance with longer PT and significantly more accurate detection of objects in isolation, compared with objects in context, at lower stimulus PT. This contextual disadvantage disappeared when PT increased, indicating that figure-ground separation with recurrent processing is vital for meaningful contextual processing to occur.

Highlights

  • To understand our visual surroundings, we need to be able to categorize the complex visual input as efficiently as possible

  • This was in line with previous studies on ultrarapid categorization without perceptual masking (e.g., Poncet & Fabre-Thorpe, 2014; Praß et al., 2014; Vanmarcke & Wagemans, 2015), and the predictions formulated based on the Leabra model (O'Reilly et al., 2013)

  • This model states that a predefined search task allows top-down biasing of the relevant visual features even when stimulus presentation time (PT) lasts long enough for recurrent processing to influence the initial bottom-up sweep of information in the visual cortex (Bar, 2004; De Cesarei et al., 2015)

Summary

Introduction

To understand our visual surroundings, we need to be able to categorize the complex visual input as efficiently as possible. A semantic category can be defined as a group of two or more objects with different attributes, properties, or qualities, which are treated with regard to their meaning. Within the hierarchical organization of semantic information, categorization can take place at different levels of abstraction (Rosch, Mervis, Gray, Johnson, & Boyes-Braem, 1976). The same object or scene can be categorized at a more general, superordinate level and at a less general, basic level of abstraction. Rosch et al. (1976) defined the basic object level as the level of categorization at which categories mirror the structure of attributes perceived in the world by sharing a common shape.

Methods
Results
Conclusion
