Abstract

People segment complex, ever-changing, and continuous experience into basic, stable, and discrete spatio-temporal units called events. The literature on event segmentation investigates the mechanisms behind this ability. Event segmentation theory proposes that people predict ongoing activity and detect event boundaries where prediction error rises. In this study, we investigated the mechanism behind this ability with a computational model and accompanying psychological experiments. Inspired by event segmentation theory and predictive processing, we introduced a self-supervised model of event segmentation. The model consists of neural networks that represent different events by predicting the sensory signal at the next time step, and a cognitive component that regulates these networks on the basis of their prediction errors. To verify that the model can segment events, learn them through passive observation, and capture them in its representational space, we prepared a video of human actions rendered as point-light displays. We compared the segmentation behavior of the model with that of participants who watched the same video, at two levels of granularity. Using point-biserial correlation, we showed that the model's event boundaries correlated with the participants' responses. Moreover, by approximating the participants' representational space, we showed that the model formed a representational space similar to theirs. These results suggest that a model that tracks prediction error signals can produce human-like event boundaries and event representations. Finally, we discuss our contribution to the literature and to our understanding of how event segmentation is implemented in the brain.
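The abstract leaves the implementation unspecified, so the following is only a minimal sketch of the mechanism it describes: a pool of next-step predictors gated by prediction error. The sketch substitutes simple normalized-LMS affine predictors for the paper's neural networks, and every name in it (`AffinePredictor`, `segment`, the error threshold, the toy two-regime stream, the simulated `human_rate`) is an illustrative assumption rather than the authors' architecture; only the final `pointbiserialr` call mirrors the statistic named in the abstract.

```python
import numpy as np
from scipy.stats import pointbiserialr

rng = np.random.default_rng(0)

def aug(x):
    """Append a bias term so affine shifts in the dynamics are learnable."""
    return np.append(x, 1.0)

class AffinePredictor:
    """Next-step predictor standing in for one event model ('expert')."""
    def __init__(self, dim, lr=0.5):
        # Start from a persistence prior: predict x_{t+1} = x_t.
        self.W = np.hstack([np.eye(dim), np.zeros((dim, 1))])
        self.lr = lr

    def predict(self, x):
        return self.W @ aug(x)

    def error(self, x, target):
        return float(np.linalg.norm(target - self.predict(x)))

    def update(self, x, target):
        a = aug(x)
        err = target - self.W @ a
        self.W += self.lr * np.outer(err, a) / (a @ a)  # normalized LMS step

def segment(signal, n_experts=4, threshold=0.6):
    """Mark a boundary whenever the active expert's prediction error spikes."""
    experts = [AffinePredictor(signal.shape[1]) for _ in range(n_experts)]
    active = 0
    boundaries = np.zeros(len(signal), dtype=int)
    for t in range(len(signal) - 1):
        x, target = signal[t], signal[t + 1]
        if experts[active].error(x, target) > threshold:
            # Prediction-error spike: candidate event boundary. Hand control
            # to the expert that currently explains the input best.
            errs = [e.error(x, target) for e in experts]
            active = int(np.argmin(errs))
            boundaries[t + 1] = 1
        experts[active].update(x, target)  # only the active expert learns
    return boundaries

# Toy stream: two motion regimes (different drift rates) joined at t = 100.
stream = np.concatenate([
    np.cumsum(rng.normal(0.0, 0.1, size=(100, 3)), axis=0),
    np.cumsum(rng.normal(1.0, 0.1, size=(100, 3)), axis=0),
])
model_bounds = segment(stream)
print("model boundaries at:", np.flatnonzero(model_bounds))

# Point-biserial correlation between the binary boundary vector and a
# continuous per-frame human response rate (simulated here as a stand-in).
human_rate = rng.random(len(stream))
r, p = pointbiserialr(model_bounds, human_rate)
print(f"point-biserial r = {r:.2f} (p = {p:.2f})")
```

On this toy stream the detector flags a small cluster of boundaries around the regime change at t = 100. Point-biserial correlation is the natural statistic here because one variable (the model's boundary vector) is dichotomous while the other (the per-frame human response rate) is continuous.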
