Abstract

Cross-modal services, which combine audio, video, and haptic signals, have gradually become core components of multimedia applications. Unfortunately, owing to the stringent transmission requirements of haptic signals and the varying, even conflicting, communication requirements among these heterogeneous streams, ensuring concurrent cross-modal streaming transmission remains a significant technical challenge. To address this challenge, this work proposes an edge intelligence-empowered cross-modal streaming transmission architecture that takes full advantage of communication, caching, computation, and control capabilities (4C). In this architecture, we first introduce artificial intelligence (AI) into 4C for further performance improvement, including secure communication, efficient caching, and collaborative computation. The highlight of this work lies in deriving a control model that formulates the joint optimization of communication, caching, and computation, enabling the architecture to adapt to dynamic network conditions, diverse service scenarios, and heterogeneous streams. Finally, we explore autonomous transmission decisions for this problem through attention-based deep reinforcement learning (A-DRL). Experimental results validate the efficiency of the proposed cross-modal streaming transmission architecture.
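To illustrate the A-DRL idea at a high level, the sketch below shows how an attention layer can weigh per-stream state vectors (audio, video, haptic) before a policy head scores joint transmission actions. This is a minimal toy sketch, not the paper's method: the state features, dimensions, weight initialization, and the three discrete actions are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x, axis=-1):
    # Numerically stable softmax
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

class AttentionPolicy:
    """Toy attention-based policy head (illustrative only).

    Attends over per-stream state vectors so that, e.g., a delay-critical
    haptic stream can dominate the pooled context, then scores a small
    discrete action set (assumed here: bitrate down / hold / up).
    """
    def __init__(self, state_dim=4, hidden=8, n_actions=3):
        self.Wq = rng.normal(0, 0.1, (state_dim, hidden))
        self.Wk = rng.normal(0, 0.1, (state_dim, hidden))
        self.Wv = rng.normal(0, 0.1, (state_dim, hidden))
        self.Wo = rng.normal(0, 0.1, (hidden, n_actions))

    def act(self, streams):
        # streams: (n_streams, state_dim), e.g. [delay, jitter, loss, rate]
        q, k, v = streams @ self.Wq, streams @ self.Wk, streams @ self.Wv
        attn = softmax(q @ k.T / np.sqrt(k.shape[1]))  # stream-to-stream attention
        ctx = (attn @ v).mean(axis=0)                  # pooled context vector
        probs = softmax(ctx @ self.Wo)                 # action distribution
        return int(np.argmax(probs)), probs
```

In a full A-DRL pipeline, the weights would be trained from a reward signal (e.g., a DQN or policy-gradient loss over quality-of-experience metrics) rather than left at random initialization as in this sketch.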
