Abstract

Smart systems are often battery-constrained and compete for resources from remote clouds, which results in high delay. Collaboratively sharing resources among neighbors in proximity is a promising way to control such delay for time-sensitive applications, yet few existing studies address the joint design of ubiquitous cooperation and competition with learning-enabled incentives. In this article, intelligent algorithms are introduced in a distributed fashion that encapsulate cooperation and competition to coordinate the overall goal of the cellular system with the individual goals of Internet of Things (IoT) devices. First, the utility functions of the cell and of the IoT users are designed, respectively. For the former, an incentive mechanism is constructed in which a novel deep actor-critic learning algorithm with a prioritized queue is developed for the continuous action space of the differentiated decision-making procedure. For the latter, the energy model is taken into account. Furthermore, a coalition game combined with a deep Q-learning framework is explored to model and incentivize the cooperation and competition process. Theoretical analysis and simulation studies demonstrate that the improved algorithms outperform their original versions and converge to a Nash-stable optimal or asymptotically optimal solution.
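The abstract refers to a deep actor-critic algorithm with a prioritized queue for a continuous action space. The sketch below illustrates how such a component is commonly structured, assuming a DDPG-style actor-critic with proportional prioritized replay; the class names, network sizes, and hyperparameters are illustrative assumptions, not details taken from the paper.

```python
# Illustrative sketch: actor-critic with a prioritized replay buffer for a
# continuous action space. All names and hyperparameters are hypothetical.
import numpy as np
import torch
import torch.nn as nn


class Actor(nn.Module):
    """Maps a state to a continuous action in [-1, 1]."""
    def __init__(self, state_dim, action_dim):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, 64), nn.ReLU(),
            nn.Linear(64, action_dim), nn.Tanh())

    def forward(self, s):
        return self.net(s)


class Critic(nn.Module):
    """Estimates Q(s, a) for a state-action pair."""
    def __init__(self, state_dim, action_dim):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim + action_dim, 64), nn.ReLU(),
            nn.Linear(64, 1))

    def forward(self, s, a):
        return self.net(torch.cat([s, a], dim=-1))


class PrioritizedBuffer:
    """Replay buffer sampling transitions with probability proportional to
    their last TD error (one common reading of a 'prioritized queue')."""
    def __init__(self, capacity=10_000, alpha=0.6):
        self.capacity, self.alpha = capacity, alpha
        self.data, self.prios = [], []

    def push(self, transition):
        self.data.append(transition)
        self.prios.append(max(self.prios, default=1.0))
        if len(self.data) > self.capacity:
            self.data.pop(0)
            self.prios.pop(0)

    def sample(self, batch_size):
        probs = np.array(self.prios) ** self.alpha
        probs /= probs.sum()
        idx = np.random.choice(len(self.data), batch_size, p=probs)
        return idx, [self.data[i] for i in idx]

    def update(self, idx, td_errors):
        for i, e in zip(idx, td_errors):
            self.prios[i] = abs(float(e)) + 1e-6


def train_step(actor, critic, actor_t, critic_t, buffer,
               actor_opt, critic_opt, batch_size=32, gamma=0.99):
    """One DDPG-style update from a prioritized batch of (s, a, r, s')."""
    idx, batch = buffer.sample(batch_size)
    s, a, r, s2 = (torch.tensor(np.array(x), dtype=torch.float32)
                   for x in zip(*batch))
    # Critic update: TD target computed with the target networks.
    with torch.no_grad():
        y = r.unsqueeze(-1) + gamma * critic_t(s2, actor_t(s2))
    td_error = y - critic(s, a)
    critic_loss = (td_error ** 2).mean()
    critic_opt.zero_grad(); critic_loss.backward(); critic_opt.step()
    # Actor update: ascend the critic's estimate of Q(s, actor(s)).
    actor_loss = -critic(s, actor(s)).mean()
    actor_opt.zero_grad(); actor_loss.backward(); actor_opt.step()
    # Refresh priorities so large-error transitions are replayed more often.
    buffer.update(idx, td_error.detach().squeeze(-1))
```

Prioritized sampling concentrates updates on transitions with large TD error, which is one plausible way a prioritized queue could speed convergence of the incentive-side learner described in the abstract.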
