Abstract

Mobile cloud and edge computing protocols make it possible to offer computationally heavy applications to mobile devices via computational offloading from devices to nearby edge servers or more powerful, but remote, cloud servers. Previous work assumed that computational tasks can be fractionally offloaded at both a cloud processor (CP) and a local edge node (EN) within a conventional Distributed Radio Access Network (D-RAN) that relies on non-cooperative ENs equipped with a one-way uplink fronthaul connection to the cloud. In this paper, we propose to integrate collaborative fractional computing across the CP and ENs within a Cloud RAN (C-RAN) architecture with finite-capacity two-way fronthaul links. Accordingly, tasks offloaded by a mobile device can be partially carried out at an EN and the CP, with multiple ENs communicating with a common CP to exchange data and computational outcomes while allowing for centralized precoding and decoding. Unlike prior work, we investigate the joint optimization of computing and communication resources, including the wireless and fronthaul segments, to minimize the end-to-end latency by accounting for two-way uplink and downlink transmission. The problem is tackled by using fractional programming (FP) and matrix FP. Extensive numerical results validate the performance gain of the proposed architecture as compared to the previously studied D-RAN solution.
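As a rough illustration of the collaborative fractional-offloading idea described above, the sketch below models the end-to-end latency of a single task whose computation is split between an EN and the CP over a finite-capacity two-way fronthaul, and searches over the split fraction. All parameter names, numerical values, and the simplified serial/parallel latency model are illustrative assumptions; the grid search merely stands in for the paper's FP and matrix-FP based optimization, which additionally allocates wireless and fronthaul communication resources.

```python
# Illustrative sketch (not the paper's actual FP formulation): end-to-end latency
# for a task split fractionally between an edge node (EN) and the cloud processor
# (CP) connected by a finite-capacity two-way fronthaul. All rates, capacities,
# and workloads below are hypothetical placeholder values.
import numpy as np

def end_to_end_latency(alpha,
                       input_bits=5e6,        # task input size [bits]
                       output_bits=1e6,       # task output size [bits]
                       cycles_per_bit=100.0,  # computational intensity
                       r_ul=50e6, r_dl=80e6,  # wireless uplink/downlink rates [bit/s]
                       c_fh=100e6,            # fronthaul capacity per direction [bit/s]
                       f_en=5e9, f_cp=50e9):  # EN / CP CPU speeds [cycles/s]
    """Latency when a fraction `alpha` of the task is computed at the EN and the
    remaining (1 - alpha) is forwarded to the CP over the fronthaul."""
    t_ul = input_bits / r_ul                                  # device -> EN (wireless uplink)
    t_en = alpha * input_bits * cycles_per_bit / f_en         # EN computation
    t_fh_ul = (1 - alpha) * input_bits / c_fh                 # EN -> CP fronthaul (uplink)
    t_cp = (1 - alpha) * input_bits * cycles_per_bit / f_cp   # CP computation
    t_fh_dl = (1 - alpha) * output_bits / c_fh                # CP -> EN fronthaul (downlink)
    # EN and CP branches run in parallel; results are merged at the EN
    t_compute = max(t_en, t_fh_ul + t_cp + t_fh_dl)
    t_dl = output_bits / r_dl                                 # EN -> device (wireless downlink)
    return t_ul + t_compute + t_dl

# Grid search over the offloading fraction as a stand-in for the FP-based solver.
alphas = np.linspace(0.0, 1.0, 101)
latencies = [end_to_end_latency(a) for a in alphas]
best = int(np.argmin(latencies))
print(f"best EN fraction alpha = {alphas[best]:.2f}, latency = {latencies[best]*1e3:.1f} ms")
```

Even in this toy model, the optimal split balances the EN's slower processor against the fronthaul round-trip incurred by pushing work to the CP, which is the trade-off the paper's joint optimization captures in full.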

Highlights

  • Mobile cloud and edge computing techniques enable computationally heavy applications such as gaming and augmented reality

  • We validate via numerical results the performance gain of the proposed Cloud Radio Access Network (C-RAN) architecture as compared to the Distributed Radio Access Network (D-RAN) reference system

  • We study the design of collaborative cloud and edge mobile computing within a C-RAN architecture for minimal end-to-end latency


Summary

INTRODUCTION

To the best of our knowledge, reference [3] was the first to study the joint optimization of computation and communication resources for mobile wireless edge computing systems, with follow-up works including [4]. Both papers [3], [4] aimed at minimizing energy expenditure under constraints on the end-to-end latency that encompass the contributions of both communication and computation. In [21], the authors tackled the optimization of the functional split for collaborative computing systems equipped with a packet-based fronthaul network. It was assumed in [21] that the physical-layer (PHY) functionalities, which include channel encoding and decoding, are located only at the ENs. In [22], the authors addressed the task allocation and traffic path planning problem for a C-RAN system under the assumption that the service latency consists only of the task processing delay and the path delay on the fronthaul links. As for notation, E[·] represents the expectation operator, and ||x|| denotes the Euclidean 2-norm of a vector x.

SYSTEM MODEL
Computational Tasks and Collaborative Computing Model
Wireless Channel Model for Edge Link
OPTIMIZATION FOR THE D-RAN ARCHITECTURE
Orthogonal TDMA
Non-Orthogonal Multiple Access
OPTIMIZATION FOR THE C-RAN ARCHITECTURE
Uplink Communication and Latency
Downlink Communication and Latency
Total End-to-End Latency With C-RAN
NUMERICAL RESULTS
Convergence of the Proposed Algorithm
Performance Gains of the C-RAN Architecture
Performance Gains of Collaborative Cloud-Edge Computing
CONCLUSIONS
