Abstract Benthic fishes are a common target of scientific monitoring but are difficult to quantify because of their close association with bottom habitats that are hard to access. Advances in image‐acquisition technologies, machine vision and deep learning have made capturing and quantifying fishes with cameras increasingly feasible. We present a method and open‐source software called ‘FishScale’ to estimate benthic fish lengths, numeric abundance and biomass density in underwater environments assessed with down‐looking monocular images. ‘FishScale’ estimates fish abundances and size frequencies from near‐nadir monocular images in which fish have already been semantically segmented. The software accounts for lens distortion, underwater magnification effects and fish body curvature to automatically estimate fish lengths and the areas of the images in which they were captured. Numeric and biomass densities are estimated through a deterministic machine vision algorithm that requires a user‐provided length–weight relationship for species of interest and calibration images. Validation studies show that lengths and weights can be estimated with high accuracy and precision for round goby (Neogobius melanostomus) captured in distorted action camera images and for large‐bodied lake trout (Salvelinus namaycush) imaged with a machine vision camera. The real‐world utility of the approach is demonstrated in a case study estimating round goby abundances and size frequencies along a 10.7‐km transect surveyed with an autonomous underwater vehicle in Lake Michigan, USA. Our validation studies demonstrate that the approach estimates benthic and benthopelagic fish lengths and weights with little bias and good accuracy and precision for species with markedly different body shapes and sizes.
The method is applicable to data collected with a variety of nadir imaging approaches, with broad applications to fisheries monitoring and to quantifying any species or object for which nadir images and camera‐to‐subject working distances are available.
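The density estimation described above can be sketched in a few lines: segmented fish lengths are converted to weights with a user‐provided length–weight relationship (typically W = aL^b), and counts and summed weights are divided by the imaged area. This is a minimal illustration of that calculation only; the function names and parameter values are hypothetical and are not FishScale's actual API.

```python
# Minimal sketch of numeric and biomass density estimation from segmented
# fish lengths. The length-weight coefficients a, b and all names below are
# illustrative assumptions, not values or functions from FishScale.

def weight_from_length(length_cm: float, a: float, b: float) -> float:
    """Standard allometric length-weight relationship: W = a * L^b (grams)."""
    return a * length_cm ** b

def density_estimates(lengths_cm, surveyed_area_m2, a, b):
    """Return (numeric density, biomass density) per square metre of imaged bottom."""
    weights_g = [weight_from_length(L, a, b) for L in lengths_cm]
    numeric_density = len(lengths_cm) / surveyed_area_m2   # fish per m^2
    biomass_density = sum(weights_g) / surveyed_area_m2    # grams per m^2
    return numeric_density, biomass_density

# Example with made-up round goby lengths and illustrative coefficients
n_dens, b_dens = density_estimates(
    [6.0, 8.5, 7.2], surveyed_area_m2=2.5, a=0.0089, b=3.1
)
```

In the full method, `surveyed_area_m2` would come from the software's per‐image area estimates (after correcting for lens distortion and underwater magnification), and lengths from the curvature‐corrected measurements of segmented fish.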