Abstract

Ray-tracing techniques offer accurate path loss predictions but suffer from high computational complexity. To achieve fast and accurate path loss prediction, this article applies a deep learning-based image-to-image translation technique to construct a path loss model for urban environments. The proposed method combines a variational autoencoder with a generative adversarial network to translate images from the domain of street maps to the domain of path loss. It is trained in a supervised manner on paired samples, where the input is a street map with 3-D building information and the output is the path loss over the same area obtained from a ray-tracing model. Based on a realistic digital map of urban Taipei City, simulation results show that the proposed model outperforms conventional ones when operating in the 3.5 GHz frequency band: the standard deviation of the prediction error is reduced by over 62%. Beyond prediction accuracy, the proposed model has the advantage of low computational complexity compared with ray-tracing techniques. Hence, it has great potential for the deployment of unmanned aerial vehicle-mounted base stations (UAV-BSs) in future communication systems, where the optimal UAV mobility can be determined through rapid evaluation of UAV-BS coverage using the proposed model.
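As a minimal illustration of the evaluation metric the abstract reports (the standard deviation of the path loss prediction error, reduced by over 62%), the following sketch compares a predicted path loss map against a ray-tracing reference. This is not the paper's code; the function name and the toy dB values are assumptions for illustration only.

```python
import numpy as np

def prediction_error_std(predicted_db, reference_db):
    """Standard deviation of the per-pixel path loss error (dB).

    Both inputs are path loss maps over the same area: the model's
    prediction and the ray-tracing reference it is evaluated against.
    """
    error = np.asarray(predicted_db, dtype=float) - np.asarray(reference_db, dtype=float)
    return float(np.std(error))

# Toy 2x2 path loss maps in dB (illustrative values, not from the paper).
reference = np.array([[100.0, 102.0],
                      [98.0, 105.0]])
predicted = np.array([[101.0, 101.0],
                      [99.0, 104.0]])

sigma = prediction_error_std(predicted, reference)  # error is [+1, -1, +1, -1] dB
```

Here `sigma` is 1.0 dB; in the paper, the same statistic is computed between the learned model's output and the ray-traced ground truth over the Taipei digital map.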
