Abstract

Cloud gaming, or gaming as a service, the newest entry in the online gaming world, leverages the well-known concept of cloud computing to provide real-time gaming services to players. This paradigm offers an affordable, flexible, and high-performance solution for end users with constrained computing resources, enabling them to play graphically demanding games on low-end thin clients: all rendering is performed in the cloud and the resulting high-quality video is streamed to the player. Despite these advantages, the quality of experience in cloud gaming suffers from high and unstable end-to-end delay. Because data centers are in charge of the complex rendering and video encoding computations that deliver a high-quality gaming experience, an efficient and intelligent resource allocation mechanism is required to allot resources (e.g., memory and network bandwidth) to gaming sessions consistent with their requirements. In this paper, we propose a bi-objective optimization method that finds an optimal path for packet transmission within a data center by minimizing delay and maximizing bandwidth utilization. We use the analytic hierarchy process (AHP), a multi-criteria decision-making technique, to solve this NP-complete optimization problem. The resulting method is an AHP-based game-aware routing (AGAR) scheme that considers the requested game type and its delay and bandwidth requirements to select the best routing path for each game session in a cloud gaming network. The method executes within a software-defined network (SDN) controller, which affords it a global view of the data center with respect to communication delay and available bandwidth. Simulation results indicate that AGAR can reduce end-to-end delay by up to 9.5% compared with three conventional representative methods: delay-based Dijkstra, equal-cost multi-path (ECMP) routing, and the Hedera routing algorithm. In addition, we demonstrate that the proposed method assigns game flows to network paths and OpenFlow switches in a balanced manner that prevents potential network bottlenecks.
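To make the path-selection idea concrete, the Python sketch below illustrates how AHP-derived weights for delay and available bandwidth could be used to rank candidate paths for a game flow. The pairwise comparison values, path metrics, and function names are illustrative assumptions, not the paper's actual implementation or parameters.

```python
# Minimal sketch of AHP-style path selection for a game flow.
# All names and numbers are illustrative assumptions.
import numpy as np

def ahp_weights(pairwise):
    """Derive criterion weights from a pairwise comparison matrix
    via its principal eigenvector (standard AHP prioritization)."""
    vals, vecs = np.linalg.eig(pairwise)
    principal = np.real(vecs[:, np.argmax(np.real(vals))])
    return principal / principal.sum()

def score_paths(paths, pairwise):
    """Score candidate paths by a weighted sum of normalized criteria:
    lower delay is better, higher residual bandwidth is better."""
    w_delay, w_bw = ahp_weights(pairwise)
    delays = np.array([p["delay_ms"] for p in paths])
    bws = np.array([p["avail_bw_mbps"] for p in paths])
    # Normalize so both criteria become "larger is better".
    delay_score = delays.min() / delays   # inverts delay
    bw_score = bws / bws.max()
    return w_delay * delay_score + w_bw * bw_score

if __name__ == "__main__":
    # Hypothetical candidate paths as seen by the SDN controller.
    candidates = [
        {"id": "p1", "delay_ms": 4.0, "avail_bw_mbps": 600},
        {"id": "p2", "delay_ms": 2.5, "avail_bw_mbps": 300},
        {"id": "p3", "delay_ms": 6.0, "avail_bw_mbps": 900},
    ]
    # Example judgment: delay rated 3x as important as bandwidth
    # for a latency-sensitive game session (assumed value).
    comparison = np.array([[1.0, 3.0],
                           [1.0 / 3.0, 1.0]])
    scores = score_paths(candidates, comparison)
    best = candidates[int(np.argmax(scores))]
    print("selected path:", best["id"])
```

With the assumed judgment matrix above, delay receives roughly 75% of the weight, so the sketch picks the lowest-delay path unless another path offers substantially more residual bandwidth; a different game type would simply supply a different comparison matrix.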
