Fire has been a ubiquitous natural visual stimulus throughout evolution. Its swiftly changing colour and luminance gradients, and its variety of shapes, pose a challenge for dynamic form encoding. Here we investigated the human capacity for dynamic form encoding and recognition, testing the effect of delay and padding on temporal search for dynamic targets within fire clips.

In Experiment 1 we used a delayed-match-to-sample task with clips drawn from video of a hearth fire. Subjects viewed a 1-second sample clip followed by a longer test clip, then judged whether the sample appeared in the test clip (yes/no). We varied the delay between sample and test (1, 5, 10, or 15 seconds) and clip orientation (upright or inverted). Performance decreased linearly with delay, from 67% correct at a 1-second delay to 58% correct at a 15-second delay (p<0.05, paired-samples t-test). Performance was not significantly affected by inversion, dropping only from 67% to 66% (p=0.3, paired-samples t-test).

In Experiment 2 we varied the temporal position of the 1-second sample within the test clip (sample-test delay fixed at 1 second). The sample was preceded by a distractor clip of length x and followed by one of length y, with x and y varied independently across {0.25, 0.75, 1.25, 2.25} seconds. The test clip thus lasted x+s+y seconds (s = 1 second, the sample duration), whether or not the sample was present. Increasing the post-sample padding from 0.25 s to 2.25 s caused a large drop in mean accuracy, from 75% to 52% averaged across all pre-padding lengths (p=0.0073, paired-samples t-test).

We conclude that detection of target clips of fire degrades linearly with sample-test delay but is not substantially degraded by inversion (Experiment 1). Furthermore, distractors cause errors even when presented after the sample (Experiment 2), showing that dynamic samples of fire are not easily individuated.

Meeting abstract presented at VSS 2014