Abstract

The high complexity of mobile cyber-physical system (MCPS) dynamics makes it difficult to apply classical methods to optimizing the MCPS agent management policy. In this regard, intelligent control methods, in particular those based on artificial neural networks (ANN) and multi-agent deep reinforcement learning (MDRL), are gaining relevance. In practice, applying MDRL to MCPS faces the following problems: 1) existing MDRL methods have low scalability; 2) inference of the ANNs used has high computational complexity; 3) MCPS trained using existing methods have low functional safety. To solve these problems, we propose the concept of a new MDRL method based on the existing MADDPG method. Within this concept, it is proposed: 1) to increase the scalability of MDRL by using information not about all other MCPS agents, but only about the n nearest neighbors; 2) to reduce the computational complexity of ANN inference by using a sparse ANN structure; 3) to increase the functional safety of trained MCPS by using a training set with an uneven distribution of states. The proposed concept is expected to help address the challenges of applying MDRL to MCPS; experimental studies are planned to confirm this.
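The first idea above, conditioning each agent on its n nearest neighbors rather than on all other agents, can be illustrated with a minimal sketch. This is not the authors' implementation; the function name, array shapes, and Euclidean-distance metric are all assumptions made for illustration:

```python
import numpy as np

def nearest_neighbor_observations(positions, observations, n):
    """Gather, for each agent, the observations of its n nearest neighbors.

    positions:    (N, 2) array of agent coordinates
    observations: (N, d) array of per-agent observation vectors
    n:            number of neighbors to keep (n < N)

    Returns an (N, n*d) array: each row concatenates the observations of
    that agent's n closest neighbors. Feeding this fixed-size vector to a
    centralized critic keeps its input dimension independent of the total
    number of agents N.
    """
    N = positions.shape[0]
    # Pairwise Euclidean distances between all agents
    diffs = positions[:, None, :] - positions[None, :, :]
    dists = np.linalg.norm(diffs, axis=-1)
    np.fill_diagonal(dists, np.inf)  # exclude the agent itself
    # Indices of the n nearest neighbors for each agent
    idx = np.argsort(dists, axis=1)[:, :n]
    return observations[idx].reshape(N, -1)
```

With this restriction, the critic input grows with n (a constant) rather than with N, which is the claimed source of improved scalability.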

Highlights

  • With the advancement of deep reinforcement learning methods, more and more complex problems are falling into the area of interest of researchers

  • Single-agent deep reinforcement learning (SDRL) has established itself as a powerful and versatile tool for solving intellectual problems at a level comparable to that of a human [12,13,14]. These factors determine the relevance of extending SDRL for use in multi-agent systems (MAS) in the form of multi-agent deep reinforcement learning (MDRL)

  • The high computational complexity of the ANN inference used to control mobile cyber-physical system (MCPS) agents requires an increase in the computational power of the agents, which leads to a significant increase in their cost

Introduction

With the advancement of deep reinforcement learning methods, more and more complex problems are falling into the area of interest of researchers. Single-agent deep reinforcement learning (SDRL) has established itself as a powerful and versatile tool for solving intellectual problems at a level comparable to that of a human [12,13,14]. Taken together, these factors determine the relevance of extending SDRL for use in MAS in the form of MDRL. In practice, however, applying MDRL to MCPS runs into the following problems:

1. Poor scalability of existing methods leads to a large amount of computational resources, a long time, and/or a high cost of the MDRL process as the number of MCPS agents grows.

2. High computational complexity of the ANN inference used to control MCPS agents requires an increase in the computational power of the agents, which leads to a significant increase in their cost.
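The second problem concerns inference cost, which the abstract proposes to address with a sparse ANN structure. A minimal sketch of one common realization of this idea, pruning connections with a fixed binary mask, is shown below; the density value, function names, and masking scheme are illustrative assumptions, not the method proposed in the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def make_sparse_layer(n_in, n_out, density=0.2):
    """Build a dense weight matrix and zero out all but a `density`
    fraction of its connections via a fixed binary mask."""
    w = rng.standard_normal((n_in, n_out)) * 0.1
    mask = (rng.random((n_in, n_out)) < density).astype(w.dtype)
    return w * mask, mask

def sparse_forward(x, w_masked):
    """Forward pass through the masked layer with ReLU activation.

    A dense matmul is used here only to illustrate the structure; with a
    true sparse storage format (e.g. CSR) the multiplications against
    zeroed connections would be skipped, reducing inference cost."""
    return np.maximum(x @ w_masked, 0.0)
```

At 20% density, roughly 80% of the layer's multiply-accumulate operations can in principle be skipped, which is the intended source of the inference-cost reduction.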
