Abstract

Monitoring giant panda (Ailuropoda melanoleuca) behaviour is critical for conservation and for understanding the animals' health. Currently, captive giant panda behaviour is usually monitored by caregivers, and in previous studies researchers have observed panda behaviours only over short time spans. Neither caregivers nor researchers, however, can monitor pandas around the clock with traditional observation methods; behaviour data are therefore difficult to collect over long periods and prone to error when recorded manually. Some researchers have used wearable devices, such as accelerometer ear tags and collar-mounted units with a global positioning system (GPS) receiver, and contactless devices, such as depth cameras and video cameras, to study the behaviour of other animals, including primates and American white pelicans. The giant panda, an icon of endangered species conservation, has been almost completely neglected in these studies. To monitor giant panda behaviour effectively, a fully automated behaviour recognition method based on Faster R-CNN and two modified ResNets was developed. The Faster R-CNN network detects panda bodies and panda faces in images; one modified ResNet classifies behaviour into five classes (walking, sitting, resting, climbing, and eating), and the other recognises whether the panda's eyes and mouth are open or closed. Experiments were conducted on 10,804 images collected from over 218 pandas in various environments and illumination conditions. The results were encouraging: the method achieved an overall accuracy of 90% for the five panda behaviours and 84% for the subtle panda facial motions. The proposed method provides an effective way to monitor giant panda behaviour in captivity.
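To make the two-stage pipeline concrete, the minimal sketch below shows how such a system could be wired together, using torchvision's generic Faster R-CNN and ResNet implementations as stand-ins for the paper's modified networks. The detector label mapping, score threshold, crop size, and the multi-label facial head are illustrative assumptions, not the authors' exact configuration.

```python
import torch
from torchvision.models import resnet50
from torchvision.models.detection import fasterrcnn_resnet50_fpn

BEHAVIOURS = ["walking", "sitting", "resting", "climbing", "eating"]
BODY, FACE = 1, 2  # assumed detector labels (0 is background)

# Stage 1: Faster R-CNN localises panda bodies and faces in a frame.
detector = fasterrcnn_resnet50_fpn(weights=None, num_classes=3)

# Stage 2: two ResNet classifiers operate on the cropped regions.
behaviour_net = resnet50(weights=None, num_classes=len(BEHAVIOURS))
face_net = resnet50(weights=None, num_classes=2)  # logits: eyes open, mouth open

for net in (detector, behaviour_net, face_net):
    net.eval()

@torch.no_grad()
def recognise(frame: torch.Tensor, score_threshold: float = 0.8):
    """frame: a 3xHxW float tensor scaled to [0, 1]."""
    detections = detector([frame])[0]
    results = []
    for box, label, score in zip(detections["boxes"],
                                 detections["labels"],
                                 detections["scores"]):
        if score < score_threshold:
            continue
        # Crop the detected region and resize it for the classifiers.
        x1, y1, x2, y2 = box.int().tolist()
        crop = frame[:, y1:y2, x1:x2].unsqueeze(0)
        crop = torch.nn.functional.interpolate(crop, size=(224, 224))
        if label.item() == BODY:
            # Body crop -> one of the five behaviour classes.
            pred = behaviour_net(crop).argmax(dim=1).item()
            results.append(("behaviour", BEHAVIOURS[pred]))
        elif label.item() == FACE:
            # Face crop -> per-state sigmoid, treated here as multi-label.
            eyes_open, mouth_open = (torch.sigmoid(face_net(crop)[0]) > 0.5).tolist()
            results.append(("face", {"eyes_open": eyes_open, "mouth_open": mouth_open}))
    return results
```

In this sketch the facial task is modelled as two independent binary states (eyes open, mouth open) predicted from one face crop; the paper describes the task only as recognising whether the eyes and mouth were open or closed, so the exact output formulation is an assumption.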
