Abstract

Dynamic Spectrum Access (DSA) solutions equipped with spectrum prediction can enable proactive spectrum management and tackle the increasing demand for radio frequency (RF) bandwidth. Among various prediction techniques, Long Short-Term Memory (LSTM) is a deep learning model that has demonstrated high performance in forecasting spectrum characteristics. Although it performs well, the theoretical characterization of LSTM prediction performance has not been well developed in the literature. Therefore, in this article, we examine an LSTM-based temporal spectrum prediction model and characterize its prediction performance through theoretical analysis. To this end, we analyze the LSTM prediction outputs over simulated Markov-model-based spectrum data and spectrum measurement data. Our results suggest that the predicted scores of the LSTM-based system model can be described using mixtures of truncated Gaussian distributions. We also estimate the performance metrics using the mixture model and compare the results with the observed prediction performance over simulated and measured datasets.

Highlights

  • The results suggest that the Long Short-Term Memory (LSTM) model outperforms the baseline Auto-Regressive Integrated Moving Average (ARIMA), Moving Average, and Naïve models even at low numerical precision

  • We propose a mixture of truncated Gaussian distributions to model the prediction score outputs and estimate the performance metrics, such as the probability of error, using the proposed mixture distribution

  • We show that the prediction scores of the LSTM-based system model for the output classes can be modelled well using a mixture of truncated Gaussian distributions
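The probability-of-error estimate described in the highlights can be sketched numerically. The following is a minimal illustration, not the authors' method: it evaluates the CDF of a Gaussian mixture truncated to the score range [0, 1] and combines the two class-conditional mixtures at a decision threshold. All mixture parameters, the class prior, and the threshold of 0.5 are illustrative assumptions.

```python
import numpy as np
from scipy.stats import truncnorm

def trunc_gauss_mixture_cdf(x, weights, mus, sigmas, lo=0.0, hi=1.0):
    """CDF of a mixture of Gaussians, each truncated to [lo, hi]."""
    cdf = 0.0
    for w, mu, s in zip(weights, mus, sigmas):
        # truncnorm takes the truncation bounds in standardized units
        a, b = (lo - mu) / s, (hi - mu) / s
        cdf += w * truncnorm.cdf(x, a, b, loc=mu, scale=s)
    return cdf

# Hypothetical fitted parameters: scores for the "idle" class cluster
# near 0, scores for the "busy" class cluster near 1 (all illustrative).
idle = dict(weights=[0.8, 0.2], mus=[0.1, 0.3], sigmas=[0.05, 0.1])
busy = dict(weights=[0.7, 0.3], mus=[0.9, 0.7], sigmas=[0.05, 0.1])

p_idle, thr = 0.6, 0.5  # assumed class prior and decision threshold

# P(error) = P(idle) P(score > thr | idle) + P(busy) P(score < thr | busy)
p_err = (p_idle * (1.0 - trunc_gauss_mixture_cdf(thr, **idle))
         + (1.0 - p_idle) * trunc_gauss_mixture_cdf(thr, **busy))
print(p_err)
```

With well-separated class-conditional mixtures like these, the estimated error probability is small; fitting the mixture parameters to observed LSTM scores (e.g. via EM) would replace the hand-picked values above.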


Summary

BACKGROUND AND MOTIVATION

Current regulatory measures for spatial and temporal allocation of radio frequency (RF) spectrum have inadequacies that result in the under-utilization of frequencies and the inability to manage the growing bandwidth demand due to an abundance of devices and applications [1], [2]. While various works establish the superior capability of LSTM models to leverage correlations in the observations and predict more accurately, the theoretical analysis and insights regarding their predictions have not been well developed in the literature. These aspects motivate us in this article to extensively analyze the prediction performance of LSTM for temporal prediction of future spectrum occupancy status. To reduce the computational cost and improve the scalability of an LSTM-based model, the dimensionality of spectrum data is reduced using tensor decomposition in [7]; their proposed method achieves lower normalized prediction error than baseline models such as AR, Support Vector Machine (SVM), CNN, and LSTM models. We analyze the LSTM prediction outputs for Markovian spectrum occupancy data and propose a mixture-model-based analytical form of prediction performance metrics such as the probability of error.
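The Markovian spectrum occupancy data mentioned above can be generated with a simple two-state chain. The sketch below is an assumed setup, not the paper's exact simulation: state 0 is idle, state 1 is busy, and the transition probabilities `p01` and `p10` are illustrative values.

```python
import numpy as np

def simulate_occupancy(n_steps, p01=0.2, p10=0.3, seed=0):
    """Simulate a two-state Markov chain of channel occupancy.

    p01 = P(idle -> busy), p10 = P(busy -> idle); both illustrative.
    Returns an array of 0/1 occupancy states, one per time step.
    """
    rng = np.random.default_rng(seed)
    states = np.empty(n_steps, dtype=int)
    states[0] = 0  # start idle
    for t in range(1, n_steps):
        u = rng.random()
        if states[t - 1] == 0:
            states[t] = 1 if u < p01 else 0
        else:
            states[t] = 0 if u < p10 else 1
    return states

occ = simulate_occupancy(100_000)
# The empirical busy fraction approaches the stationary probability
# p01 / (p01 + p10) = 0.4 for these transition probabilities.
print(occ.mean())
```

Sliding windows over such a sequence would form the input/label pairs for a one-step-ahead occupancy predictor like the LSTM studied in the article.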

SYSTEM MODEL
MODELLING THE DISTRIBUTIONS OF LSTM
MODEL VERIFICATION AND NUMERICAL RESULTS
CONCLUSION