In modern versions of the Stroop task, participants view target words presented on a computer screen in colors that are either congruent (e.g., “Red” in red) or incongruent (e.g., “Red” in blue) with the meaning of the target word. When participants report the target color, the difference in response time between congruent and incongruent targets (i.e., the Stroop effect) is typically larger than when they report the target word (i.e., the reverse Stroop effect); this difference is the classic Stroop asymmetry. For decades following Stroop’s experiments, the prevailing explanation for the asymmetry held that, for most people, word reading but not color naming has become automatic, so the target word should always become mentally accessible before the target color. Recent studies have argued instead that the advantage for the target word results not from automaticity but from a strong association between the identification task and verbal processing. To test this strength-of-association account, we developed Qualtrics scripts to deliver visual and auditory target words in Stroop and reverse Stroop tasks. The visually presented targets replicated the classic Stroop asymmetry, p < .001, ηp² = .27, and the auditorily presented targets extended it to the auditory domain, p < .001, ηp² = .18. These results support the argument that, in an identification task, the target’s semantic features enjoy an advantage over its perceptual features regardless of the sensory modality in which the target is presented. In turn, this suggests that task demands are more important than automaticity in mental processing.