Abstract

Focal animal sampling and continuous recording of behavior in situ are essential in the study of ecology. However, observation gaps and missing records are unavoidable because the focal individual can move out of sight and recording devices do not always work properly. Using an inverse reinforcement learning (IRL) framework, we have developed a novel gap‐filling method to predict the most likely route that an animal would have traveled; within this framework, an algorithm learns a reward function from animal trajectories to find the environmental features preferred by the animal. We applied this approach to GPS trajectories obtained from streaked shearwaters (Calonectris leucomelas) and provide evidence of the advantages of the IRL approach over previously used interpolation methods. These advantages are as follows: (1) No assumptions about the parametric distribution governing movements are needed, (2) no assumptions regarding landscape preferences and restrictions are needed, and (3) large spatiotemporal gaps can be filled. This work demonstrates how IRL can enhance the ability to fill gaps in animal trajectories and construct reward‐space maps in heterogeneous environments. The proposed methodology can assist movement research, which seeks to understand phenomena that are ecologically and evolutionarily significant, such as habitat selection and migration.
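To make the gap-filling idea concrete, the following is a minimal sketch, not the authors' implementation. It assumes the IRL step has already produced a per-cell reward over a discretised grid of environmental features; the gap between the last fix before an observation gap and the first fix after it is then filled with the path that maximises summed reward, found here with Dijkstra's algorithm on rewards shifted into non-negative costs. The grid, the function name, and the reward values are all illustrative assumptions.

```python
import heapq

def fill_gap(reward, start, goal):
    """Fill a trajectory gap with the maximum-reward route.

    `reward[r][c]` is a (hypothetical) learned per-cell reward; the
    most likely route from `start` (last fix before the gap) to `goal`
    (first fix after it) is taken to be the path maximising summed
    reward, found via Dijkstra on costs shifted to be positive.
    """
    rows, cols = len(reward), len(reward[0])
    max_r = max(max(row) for row in reward)
    # Shift rewards into strictly positive costs so Dijkstra applies.
    cost = [[max_r - v + 1 for v in row] for row in reward]
    dist = {start: 0}
    prev = {}
    pq = [(0, start)]
    while pq:
        d, (r, c) = heapq.heappop(pq)
        if (r, c) == goal:
            break
        if d > dist[(r, c)]:
            continue  # stale heap entry
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols:
                nd = d + cost[nr][nc]
                if nd < dist.get((nr, nc), float("inf")):
                    dist[(nr, nc)] = nd
                    prev[(nr, nc)] = (r, c)
                    heapq.heappush(pq, (nd, (nr, nc)))
    # Walk predecessors back from the goal to reconstruct the route.
    path, node = [goal], goal
    while node != start:
        node = prev[node]
        path.append(node)
    return path[::-1]
```

For example, with a reward map whose middle row is a preferred corridor (say, productive foraging water), the filled-in route follows that corridor rather than the cells around it; the paper's contribution is that the reward map itself is learned from observed trajectories rather than assumed.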
