Abstract

Ultra-reliable low-latency communication (URLLC) is one of the key services that fifth-generation (5G) technology offers to industrial wireless networks. Meanwhile, reinforcement learning is gaining attention for its ability to learn from both observed and unobserved outcomes. The conditions of industrial wireless nodes (IWNs) can vary dynamically due to internal or external factors, so unnecessary redesign of the network's resource allocation should be avoided. Traditional methods are explicitly programmed, which makes it difficult for the network to react dynamically. To overcome this, a deep Q-learning (DQL)-based resource allocation strategy is proposed that learns from the trade-offs and interdependencies experienced in the IWN. The findings indicate that the algorithm can identify the best-performing actions to improve resource allocation. Moreover, DQL further reinforces control toward an ultra-reliable, low-latency IWN. Extensive simulations show that the suggested technique distributes URLLC resources fairly. In addition, the authors assess how DQL's inherent learning parameters affect resource allocation.
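To make the idea concrete, the following is a minimal, hypothetical sketch of Q-learning applied to a toy resource-allocation choice. It is not the paper's algorithm: the paper uses a deep Q-network, whereas this sketch uses a tabular Q-function over a single state, and the reward values, action set, and function names are all illustrative assumptions.

```python
import random

# Toy environment (illustrative only): the controller assigns one of
# n_actions resource blocks to an industrial wireless node. We assume
# block 0 behaves as the "reliable, low-latency" block and pays reward
# 1.0, the others pay 0.1. A real URLLC reward would encode measured
# latency and reliability, not these made-up constants.
def step(action):
    return 1.0 if action == 0 else 0.1

def train_q(n_actions=3, episodes=500, alpha=0.1, gamma=0.9, eps=0.2, seed=0):
    """Tabular Q-learning on a single-state problem -- a drastic
    simplification of the deep Q-learning described in the abstract."""
    rng = random.Random(seed)
    q = [0.0] * n_actions
    for _ in range(episodes):
        # epsilon-greedy exploration over resource blocks
        if rng.random() < eps:
            a = rng.randrange(n_actions)
        else:
            a = max(range(n_actions), key=lambda i: q[i])
        r = step(a)
        # single-state temporal-difference update: the "next state" is
        # the same state, so its value is max(q)
        q[a] += alpha * (r + gamma * max(q) - q[a])
    return q

q = train_q()
best_block = max(range(len(q)), key=lambda i: q[i])
```

After training, the learned Q-values favor the block with the better reward, mirroring (in miniature) how the paper's DQL agent learns which allocation actions improve URLLC performance.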
