Nearshore bathymetric data are essential for assessing coastal hazards, studying benthic habitats, and supporting coastal engineering. Traditional bathymetry-mapping techniques such as ship-based sounding and airborne LiDAR are laborious, expensive, and not always efficient. Multispectral and hyperspectral remote sensing, combined with machine learning, is gaining interest as an alternative. Here, the nearshore bathymetry of southwest Puerto Rico is estimated from multispectral Sentinel-2 and hyperspectral PRISMA imagery using conventional spectral band-ratio models as well as more advanced XGBoost models and convolutional neural networks. The U-Net, trained on 49 Sentinel-2 images, and the 2D-3D CNN, trained on PRISMA imagery, achieved a Mean Absolute Error (MAE) of approximately 1 m for depths up to 20 m, outperforming the band-ratio models by ~40%. Underprediction remains a problem in turbid waters. Sentinel-2 outperformed PRISMA for depths up to 20 m (~18% lower MAE), attributed to training on a larger number of images and the use of an ensemble prediction, while PRISMA outperformed Sentinel-2 for depths between 25 m and 30 m (~19% lower MAE). Given its comparable performance, much higher image availability, and easier handling, Sentinel-2 imagery is recommended over PRISMA imagery for estimating shallow bathymetry. Future studies are recommended to train neural networks on images from various regions to increase generalization and method portability. Models are preferably trained with area-segregated splits to ensure independence between the training and testing sets; a random train-test split is not recommended for bathymetry because spatial autocorrelation of sea depth causes data leakage. This study demonstrates the high potential of machine learning models for assessing the bathymetry of optically shallow waters from optical satellite imagery.
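To make two of the ideas above concrete, the sketch below shows (a) a spectral band-ratio depth model of the kind used as the conventional baseline, written here in the classic Stumpf et al. (2003) log-ratio form (the exact variant, coefficients, and bands used in the study are assumptions), and (b) an area-segregated train/test split that assigns whole spatial blocks to either set so that spatially autocorrelated depths cannot leak between them. The block size, hash constant, and test fraction are illustrative, not values from the paper.

```python
import numpy as np

def stumpf_depth(r_blue, r_green, m1=20.0, m0=10.0, n=1000.0):
    """Log-ratio band-ratio depth model (Stumpf et al., 2003):
    depth = m1 * ln(n * R_blue) / ln(n * R_green) - m0.
    m1, m0 are normally fit by regression against reference depths
    (e.g., sonar or LiDAR); the defaults here are placeholders."""
    return m1 * np.log(n * r_blue) / np.log(n * r_green) - m0

def area_segregated_split(x, y, block_size=500.0, test_frac=0.3, seed=0):
    """Area-segregated split: tile the scene into square blocks and hold
    out entire blocks for testing, so no test pixel has a spatially
    adjacent neighbour in the training set."""
    # Combine block row/column indices into a single block id.
    bid = (np.floor(x / block_size) * 1_000_003
           + np.floor(y / block_size)).astype(np.int64)
    blocks = np.unique(bid)
    rng = np.random.default_rng(seed)
    n_test = max(1, int(test_frac * blocks.size))
    test_blocks = rng.choice(blocks, size=n_test, replace=False)
    test = np.isin(bid, test_blocks)
    return ~test, test  # boolean train/test masks over the pixels
```

With equal blue and green reflectance the log ratio is 1, so the placeholder model returns `m1 - m0`; in practice the ratio, and hence the predicted depth, varies with water depth because blue light attenuates more slowly than green. A random per-pixel split would scatter test pixels among their training neighbours, which is exactly the leakage the block split avoids.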