Coral reefs are vital to marine biodiversity but are increasingly threatened by global climate change and human activities, leading to significant declines in live coral cover (LCC). Monitoring LCC is crucial for assessing the health of coral reef ecosystems and understanding their degradation and recovery. Traditional methods for estimating LCC, such as the manual interpretation of underwater survey videos, are labor-intensive and time-consuming, limiting their scalability for large-scale ecological monitoring. To overcome these challenges, this study introduces a deep learning-based approach that uses semantic segmentation to automatically interpret LCC from underwater videos. Specifically, we enhanced PSPNet for live coral segmentation by incorporating channel and spatial attention mechanisms, along with pixel shuffle modules. Experimental results show that the proposed model achieved a mean Intersection over Union (mIoU) of 89.51% and a mean Pixel Accuracy (mPA) of 94.47%, demonstrating high accuracy in live coral segmentation. Moreover, the proposed model's LCC estimates align more closely with manual interpretations than those of competing models, with a mean absolute error of 4.17%, compared to 5.89% for the original PSPNet, 6.03% for DeepLab v3+, 7.12% for U-Net, and 6.45% for HRNet. By automating LCC estimation, the proposed approach can greatly improve monitoring efficiency, contributing to global conservation efforts by enabling more scalable monitoring and management of coral reef ecosystems.
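For readers unfamiliar with the modules named above, the PyTorch sketch below illustrates the kind of channel attention, spatial attention, and pixel shuffle components the abstract describes. The abstract does not specify how these are wired into PSPNet, so the class names, reduction ratio, kernel size, and 8x upsampling factor here are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    """Channel attention: reweights feature channels from a global pooled descriptor
    (squeeze-and-excitation style; hyperparameters are assumptions)."""
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.fc = nn.Sequential(
            nn.Conv2d(channels, channels // reduction, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1),
            nn.Sigmoid(),
        )

    def forward(self, x):
        return x * self.fc(self.pool(x))

class SpatialAttention(nn.Module):
    """Spatial attention: builds a per-pixel weight map from channel-wise
    average and max statistics (CBAM-style; kernel size is an assumption)."""
    def __init__(self, kernel_size=7):
        super().__init__()
        self.conv = nn.Conv2d(2, 1, kernel_size, padding=kernel_size // 2)

    def forward(self, x):
        avg = x.mean(dim=1, keepdim=True)
        mx, _ = x.max(dim=1, keepdim=True)
        attn = torch.sigmoid(self.conv(torch.cat([avg, mx], dim=1)))
        return x * attn

class PixelShuffleHead(nn.Module):
    """Segmentation head that upsamples with sub-pixel convolution
    (nn.PixelShuffle) instead of bilinear interpolation."""
    def __init__(self, in_channels, num_classes, scale=8):
        super().__init__()
        self.proj = nn.Conv2d(in_channels, num_classes * scale ** 2, 3, padding=1)
        self.shuffle = nn.PixelShuffle(scale)

    def forward(self, x):
        return self.shuffle(self.proj(x))

# Example: refine hypothetical 512-channel pyramid features, then predict
# 2 classes (live coral / background) at 8x the feature resolution.
feats = torch.randn(1, 512, 60, 60)
refined = SpatialAttention()(ChannelAttention(512)(feats))
logits = PixelShuffleHead(512, num_classes=2, scale=8)(refined)
print(logits.shape)  # torch.Size([1, 2, 480, 480])
```

One plausible motivation for the pixel shuffle head, consistent with the abstract's goal of precise LCC estimation, is that sub-pixel convolution learns its upsampling weights rather than interpolating, which can sharpen coral boundary predictions relative to fixed bilinear upsampling.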