Abstract
With the development of human-machine interaction, users increasingly expect immersive experiences driven by multi-dimensional stimuli. Facing this trend, cross-modal collaborative communication is considered an effective technology in the Industrial Internet of Things (IIoT). In this paper, we focus on the open issues of resource reuse, pair interactivity, and user assurance in cross-modal collaborative communication, with the aim of improving quality of service (QoS) and user satisfaction. To this end, we propose a novel modal-aware resource allocation architecture to address these challenges. First, taking the characteristics of the multiple modalities into account, we introduce network slices to virtualize resource allocation, which is modeled as a Markov Decision Process (MDP). Second, we decompose the problem via transformation of the probabilistic constraint and Lyapunov optimization. Third, we propose a decentralized deep reinforcement learning (DRL) method for the dynamic environment. Meanwhile, a federated DRL framework is provided to overcome the training limitations of the local DRL models. Finally, numerical results demonstrate that our proposed method outperforms other decentralized methods and achieves superiority in cross-modal collaborative communication.
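The abstract states that a federated DRL framework aggregates local DRL models to overcome their individual training limitations. The paper's exact aggregation rule is not given here; the sketch below illustrates the standard weighted parameter averaging (FedAvg-style) commonly used for such aggregation. The function name and the use of per-agent sample counts as weights are illustrative assumptions, not details from the paper.

```python
import numpy as np

def federated_average(local_weights, sample_counts):
    """Weighted average of local model parameters (FedAvg-style sketch).

    local_weights:  list of 1-D parameter vectors, one per local DRL agent.
    sample_counts:  number of local training samples per agent, used as
                    aggregation weights (an assumed weighting scheme).
    """
    counts = np.asarray(sample_counts, dtype=float)
    weights = counts / counts.sum()            # normalize to sum to 1
    stacked = np.stack(local_weights)          # shape: (num_agents, num_params)
    return (weights[:, None] * stacked).sum(axis=0)

# Three toy agents with 2-parameter policies; the third has twice the data.
locals_ = [np.array([1.0, 2.0]), np.array([3.0, 4.0]), np.array([5.0, 6.0])]
global_w = federated_average(locals_, sample_counts=[10, 10, 20])
print(global_w)  # -> [3.5 4.5]
```

In a federated round, each agent would train locally on its own slice observations, upload its parameters, receive `global_w` back, and continue training, so that no raw modal data leaves the device.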