Abstract

Camera traps are a widely used tool for monitoring wildlife with minimal human intervention. The number of deployed camera traps can reach several hundred, and the accumulated data volume can reach several terabytes. The photos and videos often contain empty frames produced when a camera's motion detector is triggered accidentally, for example by wind. Reserve staff must then process the images manually and sort them by animal species. In this study, we propose a technology for analysing camera trap data with two-stage neural network processing. The first stage separates empty images from non-empty ones; through a comparative analysis, we identified the optimal detector model from the YOLO series for this task. The second stage classifies the objects found by the detector; for this purpose, we carried out a comparative analysis of classifier architectures from the ResNet series. Based on the selected algorithms, a two-stage system for processing camera trap data was implemented as a graphical application that can run on any operating system. The software will significantly reduce the time needed to process camera trap data and will simplify environmental analysis.
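
A minimal sketch of such a two-stage pipeline is shown below, assuming the ultralytics YOLO package and a torchvision ResNet with placeholder pretrained weights. The specific model variants, the weight files, and the camera_trap_photos folder are illustrative assumptions, not the authors' implementation; in practice both models would be fine-tuned on camera trap imagery.

```python
# Illustrative sketch of a two-stage camera trap pipeline (not the authors' code):
# stage 1 runs a YOLO detector to discard empty frames, stage 2 classifies each
# detected object with a ResNet. Weights, paths, and labels are placeholders.
from pathlib import Path

import torch
from PIL import Image
from torchvision import models
from torchvision.models import ResNet50_Weights
from ultralytics import YOLO

detector = YOLO("yolov8n.pt")                           # stage 1: detector (placeholder weights)
weights = ResNet50_Weights.IMAGENET1K_V2
classifier = models.resnet50(weights=weights).eval()    # stage 2: classifier (placeholder weights)
preprocess = weights.transforms()

def process_image(path: str) -> list[str]:
    """Return class labels for objects found in the image; an empty list means an empty frame."""
    detections = detector(path, verbose=False)[0]
    image = Image.open(path).convert("RGB")
    labels = []
    for box in detections.boxes.xyxy.tolist():          # no boxes -> frame treated as empty
        crop = image.crop(tuple(int(v) for v in box))
        with torch.no_grad():
            logits = classifier(preprocess(crop).unsqueeze(0))
        labels.append(weights.meta["categories"][logits.argmax().item()])
    return labels

if __name__ == "__main__":
    for img in sorted(Path("camera_trap_photos").glob("*.jpg")):  # hypothetical input folder
        found = process_image(str(img))
        print(img.name, "->", "empty" if not found else ", ".join(found))
```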
