Abstract

Animal behavior analysis plays a crucial role in contemporary neuroscience research. However, the performance of frame-by-frame approaches can degrade in scenarios with occlusion or motion blur. In this study, we propose a spatiotemporal network model based on YOLOv8 to improve the accuracy of key-point detection in videos of mouse behavioral experiments. The model integrates a time-domain tracking strategy with two components: the first uses the key-point detection results from the previous frame to localize candidate target regions in the subsequent frame; the second employs Kalman filtering to model key-point motion prior to detection, allowing missing key-points to be estimated. When compared with YOLOv8, DeepLabCut, and SLEAP on videos from three mouse behavioral experiments, our approach achieved significantly better pose-estimation performance. These results suggest that spatiotemporal processing offers a new and effective means of accurately tracking and estimating mouse pose.
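
The abstract's second component, using Kalman filtering to propagate key-point motion and fill in detections lost to occlusion or blur, can be illustrated with a minimal sketch. The snippet below is not the authors' implementation; it assumes a simple constant-velocity model per key-point and a hypothetical `KeypointKalmanFilter` class, and it shows only the general idea of substituting the filter's prediction when a frame yields no detection.

```python
import numpy as np

class KeypointKalmanFilter:
    """Constant-velocity Kalman filter for one 2D key-point.

    State: [x, y, vx, vy]; measurement: [x, y].
    Illustrative sketch only; not the paper's actual model.
    """

    def __init__(self, x0, y0, dt=1.0, process_var=1e-2, meas_var=1.0):
        self.x = np.array([x0, y0, 0.0, 0.0], dtype=float)   # state estimate
        self.P = np.eye(4) * 10.0                             # state covariance
        self.F = np.array([[1, 0, dt, 0],                     # state transition
                           [0, 1, 0, dt],
                           [0, 0, 1,  0],
                           [0, 0, 0,  1]], dtype=float)
        self.H = np.array([[1, 0, 0, 0],                      # measurement model
                           [0, 1, 0, 0]], dtype=float)
        self.Q = np.eye(4) * process_var                      # process noise
        self.R = np.eye(2) * meas_var                         # measurement noise

    def predict(self):
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        return self.x[:2]                                     # predicted (x, y)

    def update(self, z):
        z = np.asarray(z, dtype=float)
        y = z - self.H @ self.x                               # innovation
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)              # Kalman gain
        self.x = self.x + K @ y
        self.P = (np.eye(4) - K @ self.H) @ self.P
        return self.x[:2]


# Fill in a key-point that the detector misses on one frame (None = occluded).
detections = [(100.0, 50.0), (102.0, 51.0), None, (106.0, 53.0)]
kf = KeypointKalmanFilter(*detections[0])
for frame_idx, det in enumerate(detections):
    pred = kf.predict()
    est = kf.update(det) if det is not None else pred  # use prediction when missing
    print(f"frame {frame_idx}: estimated key-point = {est.round(2)}")
```

In the paper's pipeline, the filtered estimate would also feed the first component, constraining where the detector searches in the next frame; the sketch omits that coupling.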
