Abstract
Camera traps have revolutionized wildlife surveys by enabling the acquisition of comprehensive ecosystem information, but they typically produce massive volumes of images. Datasets captured in nature reserves close to human settlements often contain a large proportion of human activity images, i.e., images of humans or livestock. Manually identifying and labeling wildlife images in such large-scale datasets is labor-intensive, requiring many professionals and incurring high personnel costs. By harnessing deep learning to automatically distinguish wildlife images from human activity images, ecologists can focus solely on manually labeling the wildlife images, which account for only a minor fraction of the dataset; this shift can significantly reduce personnel expenses and improve work efficiency. Existing research usually treats human activity images as ordinary categories and applies species recognition methods to identify and filter them automatically. However, when human activity images overwhelmingly dominate the dataset, established species recognition methods are prone to misclassifying a substantial proportion of wildlife images as human activity images, potentially causing ecologists to miss opportunities to discover or observe wildlife. To tackle this challenge, we proposed an ensemble learning method based on a conservative strategy and mainstream deep learning frameworks to automatically identify wildlife and human activity images. We validated our method on a camera trap dataset from Lasha Mountain (LSM) in Yunnan, China. The experimental results demonstrated that our method automatically identified wildlife images from the dataset with accuracy, recall, and precision of 95.75%, 94.07%, and 83.89%, respectively, leading to an approximately 80% reduction in personnel costs.
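The "conservative strategy" described above can be illustrated with a minimal sketch: an image is auto-filtered as human activity only when every model in the ensemble agrees, and any disagreement routes the image to ecologists as potential wildlife. The function name, threshold, and voting rule here are illustrative assumptions, not the paper's actual implementation.

```python
# Hypothetical sketch of a conservative ensemble filter. Each element of
# model_probs is one model's predicted probability that the image shows
# human activity; the unanimity rule and 0.5 threshold are assumptions.

def conservative_filter(model_probs, threshold=0.5):
    """Return 'human_activity' only if ALL models exceed the threshold;
    otherwise keep the image for manual wildlife labeling."""
    if all(p >= threshold for p in model_probs):
        return "human_activity"
    return "review_as_wildlife"

# Unanimous high confidence: safe to filter out automatically.
print(conservative_filter([0.92, 0.88, 0.95]))  # human_activity
# One dissenting model: retained, so wildlife is not silently discarded.
print(conservative_filter([0.92, 0.41, 0.95]))  # review_as_wildlife
```

This bias toward retention trades precision for recall on the wildlife class, matching the paper's goal of not letting rare wildlife images be misclassified away.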