ABSTRACT
Recent developments in 6G technology have placed a critical focus on the core network architecture, which provides the robust foundation for the entire network. Self‐evolving networks are under consideration as a solution for 6G, in which networks develop autonomy as a key capability in the decision‐making process. To date, various decision‐making mechanisms have been identified, their key features and theoretical properties analyzed, and their designs customized for 6G networks. To reduce network load and improve system utilization, this research proposes a decision‐making mechanism based on the Rainbow Deep Q‐Network (RDQN). The methodology considers distributed decision‐making scenarios in which IoT devices enhance quality of experience (QoE) with minimal training. Furthermore, it targets a community of self‐evolving networks to demonstrate improvements in 6G technologies and to accelerate the learning rate. The result analysis shows that the proposed RDQN trains faster than the other algorithms, requiring only about 20 rounds for a high number of episodes (500). Similarly, in the evaluation of QoE utility, the proposed RDQN reaches a higher value of 100 when the number of target customers is large, outperforming the existing models. These results demonstrate that the proposed RDQN is more efficient for IoT traffic monitoring and better suited to the 6G environment.
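As a point of reference for the RDQN approach summarized above, the following is a minimal sketch of two standard Rainbow DQN ingredients, a dueling Q-network and a Double-DQN target. It is not the authors' implementation (the abstract does not give the architecture or hyperparameters); state_dim, n_actions, hidden, and gamma are illustrative placeholders for a generic discrete-action IoT decision task.

```python
# Illustrative sketch of two Rainbow DQN components; not the paper's code.
import torch
import torch.nn as nn


class DuelingQNet(nn.Module):
    """Dueling architecture: separate state-value and advantage streams."""

    def __init__(self, state_dim: int, n_actions: int, hidden: int = 128):
        super().__init__()
        self.feature = nn.Sequential(nn.Linear(state_dim, hidden), nn.ReLU())
        self.value = nn.Linear(hidden, 1)              # V(s)
        self.advantage = nn.Linear(hidden, n_actions)  # A(s, a)

    def forward(self, state: torch.Tensor) -> torch.Tensor:
        h = self.feature(state)
        v, a = self.value(h), self.advantage(h)
        # Q(s, a) = V(s) + A(s, a) - mean_a A(s, a)
        return v + a - a.mean(dim=1, keepdim=True)


def double_dqn_target(online: nn.Module, target: nn.Module,
                      reward: torch.Tensor, next_state: torch.Tensor,
                      done: torch.Tensor, gamma: float = 0.99) -> torch.Tensor:
    """Double-DQN target: online net selects the action, target net evaluates it."""
    with torch.no_grad():
        next_actions = online(next_state).argmax(dim=1, keepdim=True)
        next_q = target(next_state).gather(1, next_actions).squeeze(1)
        return reward + gamma * (1.0 - done) * next_q


# Hypothetical usage with random placeholder transitions.
online, tgt = DuelingQNet(8, 4), DuelingQNet(8, 4)
tgt.load_state_dict(online.state_dict())
s, s2 = torch.randn(32, 8), torch.randn(32, 8)
r, d = torch.randn(32), torch.zeros(32)
a = torch.randint(0, 4, (32, 1))
q_sa = online(s).gather(1, a).squeeze(1)
loss = nn.functional.smooth_l1_loss(q_sa, double_dqn_target(online, tgt, r, s2, d))
```

The full Rainbow agent additionally combines prioritized replay, multi-step returns, distributional value estimation, and noisy exploration layers, which are omitted here for brevity.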