Abstract

Recently, there has been increasing interest in deploying computation-intensive and rich-media applications on mobile devices, and ultra-low latency has become an important requirement for achieving high user quality of experience (QoE). However, conventional mobile communication systems cannot provide the communication and computation resources needed to support such low latency. Although 5G is expected to substantially increase communication capacity, it remains difficult to achieve ultra-low end-to-end delay for the ever-growing number of cognitive applications. To address this issue, this article first proposes a novel network architecture built on a resource cognitive engine and a data cognitive engine. Resource cognitive intelligence, based on learning network contexts, aims to provide a global view of the computing, caching, and communication resources in the network. Data cognitive intelligence, based on data analytics, is critical for provisioning personalized and smart services in specific domains. We then introduce an optimal caching strategy for the small-cell cloud and the macro-cell cloud. Experimental results demonstrate the effectiveness of the proposed caching strategy: its latency is lower than that of two conventional approaches, namely the popular caching strategy and the greedy caching strategy.
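
As context for the comparison mentioned at the end of the abstract, the Python sketch below illustrates the two conventional baselines, popular caching and greedy caching, for a single small-cell cache backed by a macro-cell cloud, and how expected per-request latency can be estimated for each. All parameters (catalog size, Zipf exponent, cache capacity, hit/miss latencies) and function names are illustrative assumptions rather than values from the paper, and the authors' proposed strategy is not reproduced here.

import numpy as np

# Illustrative parameters (assumptions, not values from the paper): a content
# catalog with Zipf-like popularity and per-tier access latencies for a
# small-cell cache backed by a macro-cell cloud.
rng = np.random.default_rng(0)
N_ITEMS = 1000                 # content catalog size
CACHE_SLOTS = 100              # small-cell cache capacity (in slots)
HIT_LATENCY_MS = 5.0           # latency when content is cached at the small cell
MISS_LATENCY_MS = 50.0         # latency when content is fetched from the macro cell

# Zipf-like popularity distribution over the catalog.
ranks = np.arange(1, N_ITEMS + 1)
popularity = 1.0 / ranks ** 0.8
popularity /= popularity.sum()

sizes = rng.integers(1, 5, size=N_ITEMS)   # item sizes, in cache slots

def expected_latency(cached: set) -> float:
    """Expected per-request latency given the set of cached item indices."""
    hit_prob = sum(popularity[i] for i in cached)
    return hit_prob * HIT_LATENCY_MS + (1.0 - hit_prob) * MISS_LATENCY_MS

def popular_caching() -> set:
    """Cache items in decreasing popularity order, skipping items that no longer fit."""
    cached, used = set(), 0
    for i in np.argsort(-popularity):
        if used + sizes[i] <= CACHE_SLOTS:
            cached.add(int(i))
            used += sizes[i]
    return cached

def greedy_caching() -> set:
    """Greedy knapsack heuristic: rank items by popularity per unit of cache space."""
    cached, used = set(), 0
    for i in np.argsort(-(popularity / sizes)):
        if used + sizes[i] <= CACHE_SLOTS:
            cached.add(int(i))
            used += sizes[i]
    return cached

print(f"popular caching: {expected_latency(popular_caching()):.2f} ms")
print(f"greedy caching:  {expected_latency(greedy_caching()):.2f} ms")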
