Using manual responses, human participants are remarkably fast and accurate at deciding whether a natural scene contains an animal, but recent data show that they are even faster to indicate with saccadic eye movements which of 2 scenes contains an animal. How could it be that 2 images can apparently be processed faster than a single image? To better understand the origin of this speed advantage in forced-choice categorization, the present study used a masking procedure to compare 4 tasks in which sensory, decisional, and motor aspects were systematically varied. With stimulus onset asynchronies (SOAs) above 40 ms, there were substantial differences in sensitivity between tasks, as determined by d' measurements, with an advantage for tasks using a single image. However, with SOAs below 30-40 ms, sensitivity was similar across all 4 tasks, despite very large differences in reaction time. This suggests that the initial part of sensory encoding relies on common and parallel processing across a large range of tasks, whether participants have to categorize the image or locate a target in 1 of 2 scenes.
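The sensitivity measure d' reported above is the standard signal-detection index, computed as the difference between the z-transformed hit rate and false-alarm rate. A minimal sketch (the rates shown are illustrative, not values from the study):

```python
from statistics import NormalDist

def d_prime(hit_rate: float, fa_rate: float) -> float:
    """Signal-detection sensitivity: z(hit rate) - z(false-alarm rate)."""
    z = NormalDist().inv_cdf  # inverse of the standard normal CDF
    return z(hit_rate) - z(fa_rate)

# Illustrative example: 84% hits and 16% false alarms give d' of roughly 2
print(d_prime(0.84, 0.16))
```

In practice, hit and false-alarm rates of exactly 0 or 1 are usually adjusted (e.g., by a small correction) before the z-transform, since the inverse normal CDF is undefined at those extremes.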