Abstract

Aquaculture farming can help mitigate the environmental impact of overfishing by meeting seafood demand with farmed fish. However, maintaining large-scale farms can be challenging, even with the help of underwater cameras affixed to farm cages, because there are hours' worth of footage to sift through, a laborious task when performed manually. A vision-based system could therefore be deployed to automatically filter useful information from video footage. This work proposes to solve the above-mentioned problems by deploying two methods for fish detection: 1) the extended UTAR Aquaculture Farm Fish Monitoring System Framework (UFFMS), a handcrafted method, and 2) the Faster Region-based Convolutional Neural Network (Faster R-CNN), a CNN-based method. Both methods can extract information about fish from video footage. Experimental results show that Faster R-CNN outperforms the extended UFFMS on well-lit footage. However, on poorly lit footage, the accuracy of Faster R-CNN drops drastically, by an average of 28.57%, despite retaining perfect precision scores.
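
For readers unfamiliar with the inference pipeline the abstract refers to, the following is a minimal sketch of frame-level detection with Faster R-CNN. It uses torchvision's off-the-shelf COCO-pretrained model purely as a stand-in for the paper's fish-trained detector; the file name "frame.jpg" and the 0.8 score threshold are illustrative assumptions, not details from the paper.

```python
import torch
import torchvision
from torchvision.transforms.functional import to_tensor
from PIL import Image

# Load a generic pretrained Faster R-CNN detector. The paper's model is
# presumably fine-tuned on aquaculture footage; COCO weights are used here
# only to illustrate the inference pipeline.
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
model.eval()

# "frame.jpg" is a hypothetical still extracted from farm-cage video.
frame = to_tensor(Image.open("frame.jpg").convert("RGB"))

with torch.no_grad():
    # The model takes a list of image tensors and returns one dict per
    # image, with "boxes", "labels", and "scores" keys.
    predictions = model([frame])[0]

# Keep only confident detections; 0.8 is an illustrative threshold.
keep = predictions["scores"] > 0.8
for box, score in zip(predictions["boxes"][keep], predictions["scores"][keep]):
    x1, y1, x2, y2 = box.tolist()
    print(f"box=({x1:.0f},{y1:.0f},{x2:.0f},{y2:.0f}) score={score:.2f}")
```

Running such a detector over every frame (or a sampled subset) of the farm footage yields per-frame counts and bounding boxes, which is the kind of automatically extracted information the abstract describes.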
