Abstract

The water depth bias of LiDAR point cloud data must be corrected. Previous models, which use fixed parameters for an entire water area, cannot efficiently correct the bias across different water environments. Therefore, this paper develops an adaptive model for correcting the water depth bias. First, a coordinate system is defined in which the water depth is taken as the X-axis and the water depth bias as the Y-axis; all of the sample points are normalized and then projected into this coordinate system according to their water depths and water depth biases. Second, the scatter points are clustered into several clusters using the developed subdivision algorithm, and with these clusters the entire water area is subdivided into several sub-regions. Finally, each sub-region is fitted with its own model, which is used to correct the water depth bias. Experimental verification and comparison analyses are conducted in three different environments: an indoor tank, an outdoor pond, and the Beihai sea. The experimental results demonstrate that the Mean Absolute Error (MAE) and the Root Mean Squared Error (RMSE) of the adaptive model are reduced by approximately 61% and 59%, respectively, relative to those of the traditional models. It can therefore be concluded that the proposed model adapts to changes across different water environments and achieves higher accuracy for water depth bias correction than traditional methods do.
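The pipeline described above (normalize, subdivide into sub-regions, fit a correction model per sub-region) could be sketched as follows. This is a minimal illustration only: the quantile-based subdivision along the depth axis and the per-region polynomial fit are stand-in assumptions, not the paper's actual clustering and fitting algorithms, and the function name `fit_subregion_models` is hypothetical.

```python
import numpy as np

def fit_subregion_models(depths, biases, n_regions=3, deg=2):
    """Fit one bias-correction model per depth sub-region.

    depths, biases : 1-D arrays of sample water depths and depth biases.
    Returns a list of ((lo, hi), coeffs) pairs, one per sub-region.
    """
    # Normalize both axes to [0, 1] (assumed normalization scheme)
    d = (depths - depths.min()) / (depths.max() - depths.min())
    b = (biases - biases.min()) / (biases.max() - biases.min())

    # Subdivide the depth axis into contiguous sub-regions by quantiles;
    # a stand-in for the paper's scatter-clustering subdivision step.
    edges = np.quantile(d, np.linspace(0.0, 1.0, n_regions + 1))

    models = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (d >= lo) & (d <= hi)
        # Fit a low-order polynomial bias model for this sub-region only,
        # so each sub-region gets its own parameters.
        coeffs = np.polyfit(d[mask], b[mask], deg)
        models.append(((lo, hi), coeffs))
    return models
```

To correct a new sounding, one would locate the sub-region containing its (normalized) depth and evaluate that region's polynomial to estimate the bias to subtract.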
