The last decade has witnessed the explosive growth of the Internet of Things (IoT), driven by ubiquitous sensing and computation services. The industrial IoT (IIoT) extends this paradigm to industrial devices, which are constrained in computational capacity and battery life. Mobile edge computing (MEC) addresses these constraints by bringing computation resources closer to devices and reducing network communication latency, helping to realize the IIoT vision. Furthermore, the open radio access network (O-RAN) is an emerging architecture that incorporates MEC servers into a provisioning framework to improve energy efficiency and relieve congestion for IIoT traffic. However, dynamic computation demands and the continuous generation of tasks by IIoT devices pose challenges for management and orchestration (MANO) and for energy efficiency. In this article, we investigate dynamic, priority-aware, on-demand resource management. To minimize the long-term average delay and the cost of computation-intensive tasks, we formulate the problem as a Markov decision process (MDP) and employ deep reinforcement learning (DRL) to derive an optimal handling policy for MEC-enabled O-RAN architectures. Specifically, we propose an MDP-assisted deep Q-network-based priority/demand-aware resource management scheme, named DQG-PD, which targets resource management and energy efficiency for IIoT devices by exploiting a deep Q-network (DQN) to jointly optimize computation and energy utilization for each service request. The DQN is split into an online network and a target network to better adapt to the dynamic IIoT environment. Experimental results show that our scheme outperforms reference schemes in terms of resource utilization, cost, energy, reliability, and average service completion ratio.
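The state, action, and reward definitions for the MEC-enabled O-RAN setting are specified in the paper body; the snippet below is only a minimal, illustrative sketch of the online/target-network DQN update mentioned above, not the paper's implementation. All identifiers (QNetwork, DQNAgent, state_dim, n_actions, sync_every) are assumptions introduced here for illustration.

```python
import copy
import random
from collections import deque

import torch
import torch.nn as nn
import torch.optim as optim


class QNetwork(nn.Module):
    """Small MLP mapping a (hypothetical) IIoT/MEC state vector to Q-values over discrete actions."""
    def __init__(self, state_dim: int, n_actions: int, hidden: int = 128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, n_actions),
        )

    def forward(self, x):
        return self.net(x)


class DQNAgent:
    """Generic DQN with separate online and target networks, as referenced in the abstract."""
    def __init__(self, state_dim, n_actions, gamma=0.99, lr=1e-3, sync_every=500):
        self.online = QNetwork(state_dim, n_actions)
        self.target = copy.deepcopy(self.online)   # frozen copy, refreshed periodically
        self.target.eval()
        self.opt = optim.Adam(self.online.parameters(), lr=lr)
        self.gamma, self.sync_every, self.steps = gamma, sync_every, 0
        self.buffer = deque(maxlen=50_000)          # replay buffer of (s, a, r, s', done)

    def act(self, state, epsilon):
        # Epsilon-greedy action selection over the online network.
        if random.random() < epsilon:
            return random.randrange(self.online.net[-1].out_features)
        with torch.no_grad():
            q = self.online(torch.as_tensor(state, dtype=torch.float32))
        return int(q.argmax().item())

    def learn(self, batch_size=64):
        if len(self.buffer) < batch_size:
            return
        batch = random.sample(self.buffer, batch_size)
        s, a, r, s2, done = map(
            lambda x: torch.as_tensor(x, dtype=torch.float32), zip(*batch))
        q_sa = self.online(s).gather(1, a.long().unsqueeze(1)).squeeze(1)
        with torch.no_grad():
            # Bellman target computed with the target network for stability.
            target = r + self.gamma * (1 - done) * self.target(s2).max(1).values
        loss = nn.functional.smooth_l1_loss(q_sa, target)
        self.opt.zero_grad()
        loss.backward()
        self.opt.step()
        self.steps += 1
        if self.steps % self.sync_every == 0:
            # Periodically copy online weights into the target network.
            self.target.load_state_dict(self.online.state_dict())
```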