Abstract

Finding the optimal beam pair and its update time in 5G systems operating at mmWave frequencies is time-intensive and resource-demanding, and this intricate procedure calls for more intelligent approaches. This work therefore proposes a machine-learning-based method for optimizing beam pair selection and its update time. The method is structured around three main modules: spatial characterization of beam pair service areas; training of a machine learning model on the collected beam pair data; and an algorithm that uses the decision function of the trained model to compute the optimal update time for beam pairs from the spatial position and velocity of the user equipment (UE). When the model was deployed in an mmWave scenario simulated in ns-3, with a single gNB and a single UE each equipped with a uniform planar array (UPA), improvements in SINR and throughput were observed. These gains stem from a reduction in the number of beam pair selections, driven by an increase in the effective time between successive beam pair searches. The method could offer real-time optimization of beam pair procedures in 5G networks and beyond.
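The third module described above can be illustrated with a minimal sketch. The following is not the authors' implementation: it assumes a toy 1-D deployment, a scikit-learn SVM as the trained classifier mapping UE position to its best beam pair, and a simple forward simulation along the UE velocity to find when the predicted beam pair changes — that crossing time stands in for the optimal update time. All names and parameters here are illustrative assumptions.

```python
# Hypothetical sketch: compute a beam pair update time by stepping the UE
# forward along its velocity until the model's predicted best beam pair
# changes (i.e., the UE leaves the current beam pair's service area).
import numpy as np
from sklearn.svm import SVC

# Toy stand-in for modules 1 and 2: 1-D UE positions (metres) labeled with
# the beam pair index assumed to serve that region best.
positions = np.arange(0.0, 100.0, 0.5).reshape(-1, 1)
labels = (positions[:, 0] // 25).astype(int)  # four 25 m "service areas"

# Trained model whose decision regions approximate the service areas.
model = SVC(kernel="rbf", gamma="scale").fit(positions, labels)

def update_time(pos, velocity, dt=0.01, horizon=10.0):
    """Seconds until the predicted best beam pair changes for a UE
    moving at `velocity` m/s from position `pos` m."""
    current = model.predict([[pos]])[0]
    t = 0.0
    while t < horizon:
        t += dt
        if model.predict([[pos + velocity * t]])[0] != current:
            return t
    return horizon  # no boundary crossed within the horizon

# A UE at 20 m moving at 10 m/s should cross into the next service area
# (near 25 m) after roughly 0.5 s in this toy setup.
print(update_time(20.0, 10.0))
```

Scheduling the next beam pair search at (or just before) this time is what reduces the number of beam pair selections while keeping the UE inside a well-served region.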
