Abstract

Recent years have witnessed a surge of immersive media services and applications, ranging from 360-degree video streaming to augmented and virtual reality (AR and VR) and, more recently, metaverse experiences. These applications share common requirements, including high fidelity, immersive interaction, and open data exchange between people and their environment. As an emerging paradigm, edge computing is increasingly capable of supporting these requirements. We first show that a key to unleashing the power of edge computing for immersive multimedia lies in how AI models and data are handled. We then present a framework that enables joint accuracy- and latency-aware edge intelligence, with adaptive deep learning model deployment and data streaming. We show that not only conventional mechanisms such as content placement and rate adaptation, but also emerging 360-degree and virtual reality streaming, can benefit from such edge intelligence.
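To make the accuracy-latency trade-off concrete, the following is a minimal sketch, not the paper's actual algorithm, of how an edge node might choose among deep learning model variants: it selects the most accurate variant whose estimated inference and transmission latency fit a client's budget. All names, numbers, and the cost model (inference time plus downlink transmission) are illustrative assumptions.

```python
# Hypothetical illustration (not the paper's algorithm): pick the most accurate
# model variant whose estimated end-to-end latency fits a client's budget.
from dataclasses import dataclass
from typing import List, Optional


@dataclass
class ModelVariant:
    name: str
    accuracy: float        # expected task accuracy (0-1), assumed profiled offline
    inference_ms: float    # measured inference time on the edge node
    output_kbits: float    # size of the result streamed back to the client


def pick_variant(variants: List[ModelVariant],
                 latency_budget_ms: float,
                 downlink_kbps: float) -> Optional[ModelVariant]:
    """Return the most accurate variant whose inference + transmission
    latency stays within the budget, or None if no variant fits."""
    def total_latency(v: ModelVariant) -> float:
        transmit_ms = v.output_kbits / downlink_kbps * 1000.0
        return v.inference_ms + transmit_ms

    feasible = [v for v in variants if total_latency(v) <= latency_budget_ms]
    return max(feasible, key=lambda v: v.accuracy, default=None)


if __name__ == "__main__":
    # Illustrative model zoo and network conditions (all values assumed).
    zoo = [
        ModelVariant("tiny",  accuracy=0.71, inference_ms=8.0,  output_kbits=40.0),
        ModelVariant("base",  accuracy=0.79, inference_ms=22.0, output_kbits=40.0),
        ModelVariant("large", accuracy=0.84, inference_ms=55.0, output_kbits=40.0),
    ]
    choice = pick_variant(zoo, latency_budget_ms=30.0, downlink_kbps=20000.0)
    print(choice.name if choice else "no variant fits the latency budget")
```

With the assumed numbers, the "large" variant is excluded by the 30 ms budget and the "base" variant is chosen, illustrating how accuracy can be traded for latency at the edge.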
