Abstract

Identifying the behaviors of other organisms is essential for an animal's survival. This ability is particularly challenged when the "actors" are dynamically occluded by other objects and thus appear fragmented as they move through an environment. Even when the view is fragmented in time and across space, humans readily recognize the behavior of dynamically occluded objects and actors. How other animals process such fragmented information, especially when it involves motion, remains uncertain. In three experiments, we investigated the ability of six pigeons to discriminate between the running and walking actions of digital animal models when dynamically occluded. The pigeons were tested in a go/no-go procedure using three models that transited behind multiple occluders in a semirealistic scene. Without ever seeing the entirety of an animal model at one time, all the pigeons learned to discriminate between these two behaviors. This discrimination transferred to an unfamiliar model and to novel transit directions, transit rates, camera perspectives, and occluders. Tests with different static and dynamic features indicated that the pigeons relied on motion features for the discrimination, especially articulated motion. These experiments demonstrate that pigeons, like humans, can discriminate actions even when their view of the actor is fragmented in time and space.
