Abstract

The recent development of 6G technology has placed critical focus on the core network architecture, which provides the robust foundation for the entire network. Self-evolving networks are under consideration as a solution for 6G, in which networks develop autonomy as a key capability in the decision-making process. To date, various decision-making mechanisms have been identified, their key features analyzed theoretically, and customized for 6G networks. To reduce network load and improve system utilization, this research proposes a decision-making scheme based on the Rainbow Deep Q-Network (RDQN). The methodology considers distributed decision-making scenarios in which IoT devices enhance quality of experience (QoE) with little training. It further targets a community of self-evolving networks to demonstrate improvements in 6G technologies and to accelerate the learning rate. The result analysis shows that the proposed RDQN achieves a better training rate than the compared algorithms, requiring only about 20 rounds for a large number of episodes (500). Similarly, when evaluating the QoE utility value, the proposed RDQN reaches a higher value of 100 when the number of target customers is large, outperforming the existing models. These results demonstrate that the proposed RDQN is more efficient for IoT traffic monitoring and better suited to the 6G environment.
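The abstract does not include the implementation details of the RDQN decision-making agent. As a rough orientation only, the sketch below shows a simplified Rainbow-style DQN agent in PyTorch, limited to two of the Rainbow components (dueling network architecture and double Q-learning); the full Rainbow algorithm additionally uses prioritized replay, multi-step returns, noisy layers, and a distributional value head. The state dimension, action count, and all hyperparameters here are illustrative placeholders, not values taken from the paper.

```python
# Minimal sketch of a Rainbow-style DQN agent (dueling + double Q only).
# All sizes and hyperparameters are illustrative assumptions.
import random
from collections import deque

import torch
import torch.nn as nn
import torch.nn.functional as F


class DuelingQNet(nn.Module):
    """Dueling architecture: separate state-value and advantage streams."""
    def __init__(self, state_dim, n_actions, hidden=128):
        super().__init__()
        self.feature = nn.Sequential(nn.Linear(state_dim, hidden), nn.ReLU())
        self.value = nn.Linear(hidden, 1)
        self.advantage = nn.Linear(hidden, n_actions)

    def forward(self, x):
        h = self.feature(x)
        v, a = self.value(h), self.advantage(h)
        # Combine streams into Q-values, centring the advantage.
        return v + a - a.mean(dim=1, keepdim=True)


class RainbowStyleAgent:
    def __init__(self, state_dim, n_actions, gamma=0.99, lr=1e-3):
        self.n_actions = n_actions
        self.gamma = gamma
        self.online = DuelingQNet(state_dim, n_actions)
        self.target = DuelingQNet(state_dim, n_actions)
        self.target.load_state_dict(self.online.state_dict())
        self.opt = torch.optim.Adam(self.online.parameters(), lr=lr)
        self.buffer = deque(maxlen=10_000)  # plain replay buffer (not prioritized)

    def act(self, state, eps=0.1):
        # Epsilon-greedy action selection over the online network's Q-values.
        if random.random() < eps:
            return random.randrange(self.n_actions)
        with torch.no_grad():
            q = self.online(torch.as_tensor(state, dtype=torch.float32).unsqueeze(0))
        return int(q.argmax(dim=1).item())

    def store(self, s, a, r, s2, done):
        self.buffer.append((s, a, r, s2, done))

    def learn(self, batch_size=64):
        if len(self.buffer) < batch_size:
            return
        batch = random.sample(self.buffer, batch_size)
        s, a, r, s2, d = map(
            lambda x: torch.as_tensor(x, dtype=torch.float32), zip(*batch)
        )
        a = a.long()
        # Double Q-learning: the online net selects the next action,
        # the target net evaluates it.
        with torch.no_grad():
            next_a = self.online(s2).argmax(dim=1, keepdim=True)
            target_q = r + self.gamma * (1 - d) * self.target(s2).gather(1, next_a).squeeze(1)
        q = self.online(s).gather(1, a.unsqueeze(1)).squeeze(1)
        loss = F.smooth_l1_loss(q, target_q)
        self.opt.zero_grad()
        loss.backward()
        self.opt.step()

    def sync_target(self):
        # Periodically copy online weights into the target network.
        self.target.load_state_dict(self.online.state_dict())
```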
