Abstract

Automated cameras (including camera traps) are an established observation tool, allowing, for example, the identification of behaviours and the monitoring of organisms without harming them. However, limitations such as imperfect detection, insufficient data storage and restricted power supply constrain the use of camera traps, making inexpensive and customizable solutions desirable. We describe a camera system and evaluation toolset based on Raspberry Pi computers and YOLOv5 that overcomes these shortcomings through its modular design. We facilitate set‐up and modification for researchers via detailed step‐by‐step guides. A customized camera system prototype was constructed to monitor fast‐moving organisms on a continuous schedule. For testing and benchmarking, we recorded mason bees (Osmia cornuta) approaching nesting aids at 20 sites. To process the extensive video material efficiently, we developed an evaluation toolset that uses the convolutional neural network YOLOv5 to detect bees in the videos. In the field test, the camera system performed reliably for more than a week (2 h per day) under varying weather conditions. YOLOv5 detected and classified bees after training on only 775 original images. Detection reliability varied with camera perspective, site and weather conditions, but a high average detection precision (78%) was achieved, and a human observer confirmed 80% of the algorithm‐based detections. The customized camera system mitigates several disadvantages of commercial camera traps through interchangeable components and meets the major requirements of field research, including moderate costs, easy assembly and an external energy source. We provide detailed user guides to bridge the gap between ecology, computer science and engineering.
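The evaluation step described above (filtering YOLOv5 detections and comparing them against human confirmation) can be sketched as follows. This is a minimal illustration, not the authors' actual pipeline: the detection tuple layout assumes YOLOv5's `(x1, y1, x2, y2, confidence, class_id)` output format, and the threshold value and function names are hypothetical.

```python
# Hypothetical sketch of post-processing YOLOv5-style detections.
# Detection format assumed: (x1, y1, x2, y2, confidence, class_id),
# as produced by YOLOv5's results.xyxy tensors. The threshold is an
# illustrative assumption, not a value from the study.

CONF_THRESHOLD = 0.25  # assumed confidence cut-off; tune per site/weather


def filter_detections(detections, threshold=CONF_THRESHOLD):
    """Keep only detections whose confidence meets the threshold."""
    return [d for d in detections if d[4] >= threshold]


def detection_precision(confirmed, total):
    """Fraction of algorithm-based detections confirmed by a human observer."""
    return confirmed / total if total else 0.0


# Example: three raw detections from one video frame, one below threshold.
raw = [
    (10, 20, 50, 60, 0.91, 0),    # bee, high confidence
    (15, 25, 55, 65, 0.10, 0),    # likely noise, discarded
    (80, 90, 120, 130, 0.55, 0),  # bee, moderate confidence
]
kept = filter_detections(raw)          # 2 detections survive
precision = detection_precision(80, 100)  # e.g. 80 of 100 confirmed -> 0.8
```

In the reported field test, this kind of human-in-the-loop check confirmed 80% of the algorithm-based detections.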