Abstract
This study proposes a layerwise quantum-based deep reinforcement learning (LQ-DRL) method for optimizing problems with large continuous action spaces in the time-series domain using deep layer training. The actions in LQ-DRL are optimized using a layerwise quantum embedding that leverages the advantages of quantum computing to maximize reward and reduce training loss. Moreover, this study employs a local loss to mitigate the barren plateau phenomenon and further enhance performance. As a particular case, the proposed scheme is employed to jointly optimize (1) UAV trajectory planning, (2) user grouping, and (3) power allocation, with the energy efficiency of the UAV as the reward; the combination of these optimized factors constitutes the action space of the presented LQ-DRL. LQ-DRL is chosen to solve this optimization problem because of its non-convexity, its continuous and large action space, and its time-series structure. From a practical standpoint, LQ-DRL addresses the energy consumption of a battery-limited UAV base station while maintaining quality-of-service (QoS) for users, by maximizing energy efficiency as the reward. As one real-world application, LQ-DRL can be employed to maximize the energy efficiency of a UAV base station in a UAV-empowered disaster-recovery network. The quantum circuits of the layerwise quantum embedding are presented to show the practical implementation on noisy intermediate-scale quantum computers. Based on the results, LQ-DRL outperformed classical DRL, achieving a higher effective dimension, higher rewards, and lower training loss. In addition, performance further improved as more layers were used.
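To make the abstract's two central ingredients concrete, the following is a minimal sketch of a layerwise parameterized circuit evaluated with a local (single-qubit) observable rather than a global one. It is an illustrative NumPy state-vector simulation only, not the paper's implementation: the two-qubit layout, the RY-plus-CNOT layer structure, and the choice of Z on qubit 0 as the local observable are assumptions made for the example.

```python
import numpy as np

def ry(theta):
    # Single-qubit RY rotation gate (real-valued, so the state stays real)
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -s], [s, c]])

# Two-qubit CNOT with qubit 0 as control (assumed entangler for this sketch)
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=float)

def circuit_state(params):
    # params: shape (n_layers, 2); one RY angle per qubit per layer.
    # Each layer = parameterized rotations followed by a CNOT entangler.
    state = np.zeros(4)
    state[0] = 1.0  # start in |00>
    for layer in params:
        u = np.kron(ry(layer[0]), ry(layer[1]))
        state = CNOT @ (u @ state)
    return state

def local_loss(params):
    # Local cost: 1 - <Z> measured on qubit 0 only. Using a local
    # observable instead of a global projector onto |00> is the kind
    # of local loss the abstract credits with mitigating barren plateaus.
    state = circuit_state(params)
    z0 = np.kron(np.diag([1.0, -1.0]), np.eye(2))  # Z on qubit 0
    return 1.0 - state @ (z0 @ state)

rng = np.random.default_rng(0)
params = rng.uniform(0, 2 * np.pi, size=(3, 2))  # 3 layers of parameters
print(local_loss(params))
```

With all angles set to zero the circuit leaves |00⟩ unchanged, so the local loss is exactly 0; with random angles it lies between 0 and 2. In the layerwise training regime the abstract describes, parameters would be optimized one layer (or block of layers) at a time rather than all at once.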