Accurate methods for detecting dead fish are of great significance for fisheries because of their potential to improve production and reduce water pollution. Small-scale fisheries typically rely on manual observation, but this approach is labor-intensive and time-consuming, which limits its application on large-scale farms. Advances in computer vision have provided the foundation for revolutionary methods of automatically identifying individual-animal behavior. However, computer vision systems often suffer from unstable accuracy, low efficiency, and limited detection capability. Thus, the aim of this study was to design a deep learning system, designated “Deadfish-YOLO”, to detect dead fish based solely on standard underwater camera images with no additional hardware. First, eight selected rearing tanks were monitored by a self-developed computer vision system. From these tank images, a dataset containing 18,114 manually labeled frames was built. Next, a lightweight backbone network was generated with YOLOv4 to ensure fast computation, and an attention mechanism was introduced into the model to suppress unimportant features. Finally, the ReLU-memristor-like activation function was adopted to improve neural-network performance. The accuracy and processing speed of Deadfish-YOLO were superior to those of other state-of-the-art single- and two-stage detection models; Deadfish-YOLO ran at 85 frames per second with a mean average precision (mAP) of 0.946 and a mean intersection over union (IoU) of 0.924. These results demonstrate that Deadfish-YOLO can be used to automatically monitor dead fish in real circulating aquaculture systems. In addition, the results of this study should facilitate the wider application of artificial-intelligence-based animal monitoring in aquaculture.
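The intersection-over-union figure reported above measures how closely a predicted bounding box overlaps the ground-truth box. As a minimal sketch of how this standard metric is computed (the corner-coordinate box format is an assumption; the paper does not specify its representation):

```python
def iou(box_a, box_b):
    """Intersection over union of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    # Corners of the intersection rectangle
    ix1, iy1 = max(ax1, bx1), max(ay1, by1)
    ix2, iy2 = min(ax2, bx2), min(ay2, by2)
    # Clamp to zero when the boxes do not overlap
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (ax2 - ax1) * (ay2 - ay1)
    area_b = (bx2 - bx1) * (by2 - by1)
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0
```

A mean IoU of 0.924 therefore indicates that predicted boxes overlap the annotated dead-fish boxes almost completely, averaged over the test detections.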