Abstract To address the shortcomings of traditional Long Short-Term Memory (LSTM) networks in Non-Line-of-Sight (NLOS) mitigation, namely the large amount of training data required and the lengthy training times, and to enhance the model's ability to process spatial and multi-level features, this paper proposes an NLOS mitigation method based on a Stacked Long Short-Term Memory (Stacked-LSTM) network and a Convolutional Neural Network (CNN). The method combines CNN and Stacked-LSTM models to efficiently extract spatial and higher-level temporal features from the Channel Impulse Response (CIR) signal, reducing the input dimension and improving model performance. The constructed CNN-Stacked-LSTM model is used to mitigate NLOS errors and reduce the impact of NLOS on the original ranging data. In the model performance validation experiment, the accuracy of the CNN-Stacked-LSTM model improved by 4%-14% over the CNN-LSTM, Transformer, Attention-LSTM, and LSTM models, and the training time was reduced by 0.07 h compared with the traditional LSTM model. Experimental results in two real Ultra-Wideband (UWB) environments show that, compared with the other four models, the RMSE of the proposed CNN-Stacked-LSTM model is reduced by 19.55%-58.96% and 8.64%-45.52%, respectively; it achieves the best NLOS mitigation effect and the highest positioning accuracy.
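The abstract describes the architecture only at a high level; the following is a minimal sketch, not the authors' implementation, of how a CNN front end feeding a stacked LSTM can regress the NLOS ranging error from a CIR vector. The framework (PyTorch), CIR length (152 samples), layer sizes, and the subtraction-based range correction at the end are all assumptions made for illustration.

```python
import torch
import torch.nn as nn

class CNNStackedLSTM(nn.Module):
    """Hypothetical CNN-Stacked-LSTM regressor for NLOS ranging error from a CIR."""

    def __init__(self, cir_len=152, cnn_channels=16, lstm_hidden=64, lstm_layers=2):
        super().__init__()
        # 1-D convolution extracts local spatial features from the CIR and
        # halves the sequence length fed to the LSTM (reduced input dimension).
        self.cnn = nn.Sequential(
            nn.Conv1d(1, cnn_channels, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.MaxPool1d(2),
        )
        # Stacked LSTM (num_layers > 1) captures higher-level temporal features.
        self.lstm = nn.LSTM(input_size=cnn_channels, hidden_size=lstm_hidden,
                            num_layers=lstm_layers, batch_first=True)
        # Regression head predicts the NLOS-induced ranging error to be
        # subtracted from the raw UWB range measurement.
        self.head = nn.Linear(lstm_hidden, 1)

    def forward(self, cir):                  # cir: (batch, cir_len)
        x = self.cnn(cir.unsqueeze(1))       # (batch, channels, cir_len // 2)
        x = x.permute(0, 2, 1)               # (batch, time, channels) for the LSTM
        _, (h, _) = self.lstm(x)
        return self.head(h[-1]).squeeze(-1)  # predicted ranging-error correction


# Usage sketch: correct raw ranges with the predicted NLOS error (dummy data).
model = CNNStackedLSTM()
cir = torch.randn(4, 152)                    # four placeholder CIR snapshots
raw_range = torch.full((4,), 5.0)            # placeholder raw range measurements (m)
corrected_range = raw_range - model(cir)
```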