Abstract
Since mobile devices typically have limited computation resources, offloading computation tasks to fog access points (F-APs) is a promising approach to support delay-sensitive and computation-intensive applications. This paper considers joint computation and communication resource allocation for multiuser multi-server systems, which aims to maximize the number of users being served and minimize the total energy consumption subject to delay tolerance constraints. The joint computation and communication resource allocation problem is solved optimally for both non-orthogonal multiple access (NOMA) and orthogonal multiple access (OMA) schemes. The joint user pairing and fog access point assignment problem for NOMA is proved to be NP-hard. For both NOMA and OMA, heuristic and optimal algorithms based on graph matching are designed. The optimal algorithms, though of high complexity, allow NOMA and OMA to be compared at their full potential and serve as benchmarks for evaluating the heuristic algorithms. Simulation results show that NOMA significantly outperforms OMA in terms of outage probability and energy consumption, especially for tight delay tolerance constraints and large computational tasks. Simulation results also demonstrate that our proposed NOMA and OMA schemes significantly outperform the swap-enabled matching algorithm widely used in the literature.
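To illustrate the graph-matching viewpoint mentioned in the abstract (this is only a sketch, not the authors' exact algorithm), the snippet below assigns NOMA user pairs to F-APs as a minimum-cost bipartite matching. The energy-cost matrix, the problem sizes, and the penalty used for delay-infeasible assignments are all assumptions for the example.

```python
# Illustrative sketch: user-pair-to-F-AP assignment as min-cost bipartite matching.
# The cost model (energy of serving pair p at F-AP m, with a large penalty when
# the delay tolerance would be violated) is an assumption for this example.
import numpy as np
from scipy.optimize import linear_sum_assignment

rng = np.random.default_rng(0)
num_pairs, num_faps = 4, 6      # assumed problem size
INFEASIBLE = 1e9                # large finite penalty (np.inf can make the solver fail)

# Hypothetical per-assignment energy costs and delay-feasibility flags.
energy = rng.uniform(0.1, 1.0, size=(num_pairs, num_faps))
feasible = rng.random((num_pairs, num_faps)) > 0.2
cost = np.where(feasible, energy, INFEASIBLE)

# Hungarian-algorithm matching: each user pair is served by at most one F-AP.
rows, cols = linear_sum_assignment(cost)
for p, m in zip(rows, cols):
    if cost[p, m] < INFEASIBLE:
        print(f"pair {p} -> F-AP {m}, energy {cost[p, m]:.3f} J")
    else:
        print(f"pair {p} cannot be served within its delay tolerance")
```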
Highlights
Modern mobile applications, such as augmented reality, face recognition, assisted driving, and interactive gaming, are increasingly computation-intensive and latency-critical [1], [2].
Simulation results demonstrate that our proposed non-orthogonal multiple access (NOMA) and orthogonal multiple access (OMA) schemes significantly outperform the swap-enabled matching and stable matching algorithms widely used in the literature.
We consider a single cell in which the fog access points (F-APs) and users are randomly distributed according to a homogeneous Poisson point process (PPP).
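As a minimal sketch of this assumed deployment model, the snippet below draws F-APs and users from independent homogeneous PPPs inside a circular cell; the densities and the cell radius are placeholder values for illustration only.

```python
# Sample F-AP and user locations from homogeneous PPPs in a disk-shaped cell.
import numpy as np

def sample_ppp_in_disk(density, radius, rng):
    """Homogeneous PPP of the given density (points per m^2) inside a disk."""
    n = rng.poisson(density * np.pi * radius ** 2)   # Poisson number of points
    r = radius * np.sqrt(rng.random(n))              # sqrt gives uniform area density
    theta = 2 * np.pi * rng.random(n)
    return np.column_stack((r * np.cos(theta), r * np.sin(theta)))

rng = np.random.default_rng(1)
R = 500.0                                   # cell radius in meters (assumed)
fap_xy = sample_ppp_in_disk(2e-5, R, rng)   # about 16 F-APs on average
user_xy = sample_ppp_in_disk(1e-4, R, rng)  # about 79 users on average
print(len(fap_xy), "F-APs,", len(user_xy), "users")
```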
Summary
Modern mobile applications, such as augmented reality, face recognition, assisted driving, and interactive gaming, are increasingly computation-intensive and latency-critical [1], [2]. To meet the latency requirements of such applications under a limited power budget, the fog radio access network (F-RAN) is a promising network architecture. It consists of a cloud server with high storage, computing, and signal processing capabilities [6]. The cloud is connected to densely deployed fog access points (F-APs) via wireless fronthaul links, allowing joint processing and cooperation among multiple F-APs. The F-APs are equipped with caching and computing capabilities that bring network functions close to mobile users. By densely deploying F-APs with high computing capabilities at the network edge, mobile users can offload their computation tasks to nearby F-APs for fast execution and battery power saving [7], [8].
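To make the offloading trade-off concrete, the following back-of-the-envelope sketch compares local execution with offloading to an F-AP using textbook delay and energy models (Shannon-rate uplink, cycles-per-bit workload, quadratic local CPU energy). All parameter values are assumed example numbers, not values from the paper.

```python
# Local execution vs. offloading to an F-AP, using standard models;
# every number here is an assumed example value for illustration.
import math

task_bits = 1e6            # task input size: 1 Mbit
cycles_per_bit = 1000      # computational intensity
f_local = 1e9              # local CPU frequency: 1 GHz
f_fap = 10e9               # F-AP CPU frequency: 10 GHz
kappa = 1e-27              # effective switched-capacitance coefficient
bandwidth = 10e6           # uplink bandwidth: 10 MHz
tx_power = 0.2             # uplink transmit power: 200 mW
snr = 10 ** (15 / 10)      # 15 dB received SNR

# Local execution: delay = C / f, energy = kappa * C * f^2.
cycles = task_bits * cycles_per_bit
t_local = cycles / f_local
e_local = kappa * cycles * f_local ** 2

# Offloading: uplink transmission (Shannon rate) plus remote execution;
# only the user-side transmit energy is counted here.
rate = bandwidth * math.log2(1 + snr)          # achievable uplink rate in bit/s
t_offload = task_bits / rate + cycles / f_fap
e_offload = tx_power * (task_bits / rate)

print(f"local:   {t_local:.3f} s, {e_local:.3f} J")
print(f"offload: {t_offload:.3f} s, {e_offload:.3f} J")
```

Under these assumed numbers, offloading cuts both the completion time and the user's energy by roughly an order of magnitude, which is the trade-off the delay-constrained resource allocation in this paper exploits.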