Abstract

People divide their ongoing experience into meaningful events. This process, event segmentation, is strongly associated with visual input: when visual features change, people are more likely to segment. However, the nature of this relationship is unclear. Segmentation could be bound to specific visual features, such as actor posture. Alternatively, it could be based on changes in the activity that are merely correlated with visual features. This study distinguished between these two possibilities by examining whether segmentation varies across first- and third-person perspectives. In two experiments, observers identified meaningful events in videos of actors performing everyday activities, such as eating breakfast or doing laundry. Each activity was recorded simultaneously from a first-person perspective and a third-person perspective, so the videos presented identical activities but differed in their visual features. If segmentation is tightly bound to visual features, then observers should identify different events in first- and third-person videos, and the relationship between segmentation and visual features should remain unchanged across perspectives. Neither prediction was supported. Although participants sometimes identified more events in first-person videos, the events they identified were mostly indistinguishable from those identified in third-person videos. Moreover, the relationship between a video's visual features and segmentation changed across perspectives, further demonstrating a partial dissociation between segmentation and visual input. Event segmentation appears to be robust to large variations in sensory information as long as the content remains the same; segmentation mechanisms appear to flexibly use sensory information to identify the structure of the underlying activity.
