Joint Service Scheduling and Content Caching Over Unreliable Channels
To alleviate ever-increasing data demands, edge caching plays a crucial role in improving system performance, especially in data-intensive applications. Previous works mainly focus on caching policies over reliable channels. Over unreliable channels, system performance is jointly affected by user preference and channel reliability, both of which are commonly unknown, and a high retrieval cost may be incurred even when the requested content is in a nearby cache. To address these issues, we jointly optimize the service scheduling policy and the content caching policy in this paper. We propose a maximal reward priority (MRP) policy to serve user requests, and a collaborative multi-agent actor-critic (CMA-AC) policy to update the local cache. Simulation results show that the proposed MRP policy outperforms the shortest distance priority (SDP) policy [4], and that the proposed CMA-AC policy achieves better performance than a distributed multi-agent deep Q-network (DMA-DQN) policy, especially when the number of contents and the capacity of the local cache are large. Furthermore, the proposed CMA-AC policy is robust.
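The abstract above does not spell out the MRP rule, but a "maximal reward priority" scheduler can be sketched as scoring each candidate cache by its expected reward under channel unreliability. The function name, the dictionary layout, and the reward model (hit reward discounted by estimated delivery probability) are illustrative assumptions, not the paper's exact formulation:

```python
def mrp_schedule(request, caches):
    """Return the cache with the highest expected reward for `request`.

    caches: list of dicts with keys 'contents' (a set of cached items),
    'reliability' (estimated delivery probability of the channel), and
    'hit_reward' (reward for serving the request from this cache).
    """
    best, best_score = None, float("-inf")
    for cache in caches:
        # A miss yields zero reward; a hit yields the reward discounted
        # by the probability that the unreliable channel delivers it.
        hit = request in cache["contents"]
        score = cache["hit_reward"] * cache["reliability"] if hit else 0.0
        if score > best_score:
            best, best_score = cache, score
    return best, best_score
```

Under this model, a nearby cache that holds the content can still lose to a farther one with a more reliable channel, which matches the abstract's point that proximity alone is not the right priority over unreliable channels.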
- Conference Article
3
- 10.1109/icc40277.2020.9148620
- Jun 1, 2020
To reduce content transmission power and network load pressure, content caching based on a large number of small base stations (SBSs) is considered an effective solution. However, due to limited cache capacity and unknown content popularity, designing an intelligent content caching policy has become a great challenge. In this paper, we propose a generative adversarial network (GAN) combined with the distributional deep Q-network (DDQN) algorithm, named QGAN, to learn the content caching policy. A content caching network containing several cooperative SBSs is considered in the case of unknown content popularity, where each SBS fetches cached content from adjacent SBSs or the cloud. Moreover, we verify the performance of the QGAN algorithm against three classical content caching policies and one reinforcement learning algorithm. The simulation results show that the proposed algorithm improves the convergence rate and reduces the transmission cost.
- Conference Article
- 10.1109/pimrc.2018.8580959
- Sep 1, 2018
Caching popular contents near users is an effective way to relieve the burden on wireless networks and reduce the energy consumption of content delivery. In this paper, we consider a distributed way to cache contents with different user preferences in heterogeneous cellular networks (HetNets). Aiming at minimizing the energy consumption of the whole network, we formulate a joint content caching and delivery optimization problem. To handle the coupled multiplicative variables, we utilize the alternating optimization (AO) algorithm to decompose the original problem into content caching and delivery subproblems, which are solved separately through a knapsack solution and the message passing (MP) algorithm. Numerical results reveal that the proposed scheme achieves greater energy savings than conventional schemes.
- Conference Article
8
- 10.1145/2089016.2089035
- Nov 9, 2011
In this paper we address two of these problems, namely routing and content caching. For the routing problem, we introduce Potential Based Routing (PBR), which provides not only availability but also diversity and adaptability. In addition, we examine three caching policies to select a possible candidate for information-centric networking (ICN). The integrated system of PBR and a content caching policy is called Cache Aware Target idenTification (CATT). We present simulation results to evaluate its performance.
- Research Article
52
- 10.1109/tits.2020.3043593
- Jan 13, 2021
- IEEE Transactions on Intelligent Transportation Systems
Today, with the worldwide availability of and rapid growth in multimedia applications on the web, users' demands to access them are also increasing prominently. Users in vehicular environments likewise expect efficient multimedia streaming while travelling on the road. However, the high mobility of vehicles as well as the limited transmission range of infrastructure components in IP-based networks yields low performance, with high delay and additional network overhead. To provide better Quality of Experience (QoE) with high performance, Information Centric Networking (ICN) is blended with the vehicular environment. Caching content inside network nodes is an inherent feature of ICN, with various associated benefits such as low content retrieval delay, less network traffic, and path reduction. However, challenges still exist in caching content due to the resource-constrained network environment (such as limited cache capacity and node battery) as well as in securely delivering cached data. To solve these challenges and enhance network performance, we propose a cooperative caching scheme in a hierarchical network architecture that jointly considers cache location as well as combined content popularity and predicted future rating score when making caching decisions. The proposed approach uses a two-layer hierarchical architecture where nodes in the edge layer are divided into clusters. A modified Weighted Clustering Algorithm (WCA) selects the cluster heads, which are then used to decide cache locations. A probability matrix is used to compute the content caching probability, considering both the popularity and the predicted future rating of content. The approach dynamically predicts users' preferences using non-negative matrix factorization (NMF), a machine learning technique that provides the prediction of future ratings.
Based on the selection of both the cache location and the content to cache, the proposed scheme can effectively cache content in the network. Further, to deal with the secure delivery of cached content, this work supports legitimate user authorization at edge nodes. The performance of the proposed scheme is evaluated with the MATLAB Parallel Computing Toolbox. The results show significant caching improvement in terms of cache hits, hop reduction, and average delay with our proposed scheme.
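The abstract's "probability matrix" step can be sketched as a per-content caching probability that blends normalized popularity with the NMF-predicted future rating. The linear combination with weight `alpha` is an illustrative assumption; the paper's exact weighting is not given in the abstract:

```python
def caching_probabilities(popularity, predicted_rating, alpha=0.5):
    """Combine normalized popularity and predicted future rating into a
    per-content caching probability (probabilities sum to 1).
    `alpha` trades off the two signals."""
    def normalize(xs):
        total = sum(xs)
        return [x / total for x in xs] if total else [0.0] * len(xs)

    pop = normalize(popularity)          # request-count based signal
    rat = normalize(predicted_rating)    # NMF-predicted rating signal
    mixed = [alpha * p + (1 - alpha) * r for p, r in zip(pop, rat)]
    return normalize(mixed)
```

A cluster head could then cache the contents with the highest resulting probabilities, subject to its capacity.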
- Research Article
76
- 10.1109/access.2019.2916314
- Jan 1, 2019
- IEEE Access
To address the drastic growth of data traffic dominated by streaming of video-on-demand files, mobile edge caching/computing (MEC) can be exploited to develop intelligent content caching at mobile network edges to alleviate redundant traffic and improve content delivery efficiency. Under the MEC architecture, content providers (CPs) can deploy popular video files at MEC servers to improve users' quality of experience (QoE). Designing an efficient content caching policy is crucial for CPs due to the content dynamics, unknown spatial-temporal traffic demands, and limited service capacity. The knowledge of users' preference is very useful and important for efficient content caching, yet often unavailable in advance. Under this circumstance, machine learning can be used to learn the users' preference based on historical demand information and decide the video files to be cached at the MEC servers. In this paper, we propose a multi-agent reinforcement learning (MARL)-based cooperative content caching policy for the MEC architecture when the users' preference is unknown and only the historical content demands can be observed. We formulate the cooperative content caching problem as a multi-agent multi-armed bandit problem and propose a MARL-based algorithm to solve the problem. The simulation experiments are conducted based on a real dataset from MovieLens and the numerical results show that the proposed MARL-based cooperative content caching scheme can significantly reduce content downloading latency and improve content cache hit rate when compared with other popular caching schemes.
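The multi-agent multi-armed bandit formulation above can be illustrated with a single-agent epsilon-greedy sketch: each MEC server tracks empirical demand per content from observed requests and caches the top items, occasionally exploring. The class and parameter names are hypothetical, and this is a simplification of the paper's MARL algorithm, not a reproduction of it:

```python
import random

class BanditCacheAgent:
    """One MEC server as a bandit learner: it estimates content demand
    from observed requests and caches the `capacity` most demanded
    contents, exploring a random subset with probability `epsilon`."""

    def __init__(self, n_contents, capacity, epsilon=0.1, seed=0):
        self.counts = [0] * n_contents   # observed requests per content
        self.capacity = capacity
        self.epsilon = epsilon
        self.rng = random.Random(seed)

    def observe(self, content_id):
        """Record one observed request (the bandit feedback)."""
        self.counts[content_id] += 1

    def choose_cache(self):
        """Pick the set of content ids to cache for the next period."""
        ids = list(range(len(self.counts)))
        if self.rng.random() < self.epsilon:   # explore: random subset
            self.rng.shuffle(ids)
            return set(ids[: self.capacity])
        # exploit: cache the empirically most demanded contents
        ids.sort(key=lambda i: self.counts[i], reverse=True)
        return set(ids[: self.capacity])
```

In the cooperative setting of the paper, agents would additionally share observations or coordinate placements across servers; that coordination is omitted here.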
- Research Article
16
- 10.1016/j.tra.2020.08.005
- Aug 31, 2020
- Transportation Research Part A: Policy and Practice
An efficient caching policy for content retrieval in autonomous connected vehicles
- Conference Article
5
- 10.1109/wcnc49053.2021.9417284
- Mar 29, 2021
With the rapid growth of mobile devices and applications, the fog radio access network (F-RAN) has been proposed as a promising paradigm for the 5th generation of mobile networks. Meanwhile, F-RAN can be further enhanced by edge caching underpinned by device-to-device (D2D) communications, where download delay can be reduced by edge caching users (EUs) providing content through D2D links to content requesting users (RUs). In this paper, to mitigate the impact of limited storage at each EU, we propose an EU classification based content caching policy for F-RAN, where the EUs are divided into two groups, each group caching a different content set. To maximize the cache hit probability at EUs, we present a caching policy selection algorithm that allows the fog radio access point (F-AP) to choose between the proposed EU classification based content caching policy and the conventional probability based caching policy. By modeling the content request queue at each EU as an independent M/D/1 queue, we analyze the average download delay under the proposed caching policy selection algorithm. The simulation results show that the proposed algorithm can significantly increase the cache hit probability of EUs and can also reduce the average download delay for RUs.
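The M/D/1 model used above has a closed-form mean delay: with arrival rate λ, deterministic service rate μ, and utilization ρ = λ/μ < 1, the mean sojourn time is T = 1/μ + ρ/(2μ(1−ρ)). A small helper makes the formula concrete (the function name is an assumption; the formula is the standard M/D/1 result):

```python
def md1_mean_delay(arrival_rate, service_rate):
    """Mean sojourn time (service + waiting) of an M/D/1 queue:
    T = 1/mu + rho / (2 * mu * (1 - rho)), with rho = lambda / mu.
    Requires rho < 1 for a stable queue."""
    rho = arrival_rate / service_rate
    if rho >= 1:
        raise ValueError("queue is unstable: utilization must be < 1")
    waiting = rho / (2 * service_rate * (1 - rho))
    return 1.0 / service_rate + waiting
```

Summing this delay over the request queues at the EUs, weighted by how requests are routed under each caching policy, gives the kind of average-download-delay comparison the abstract describes.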
- Research Article
7
- 10.1109/tnet.2020.3015474
- Aug 19, 2020
- IEEE/ACM Transactions on Networking
Least-recently-used (LRU) caching and its variants have conventionally been used as a fundamental and critical method to ensure fast and efficient data access in computer and communication systems. Emerging data-intensive applications over unreliable channels, e.g., mobile edge computing and wireless content delivery networks, have imposed new challenges in optimizing LRU caching in environments prone to failures. Most existing studies focus on reliable channels, e.g., on wired Web servers and within data centers, which have already yielded good insights and successful algorithms. Surprisingly, we show that these insights do not necessarily hold true for unreliable channels. We consider a single-hop multi-cache distributed system with data items being dispatched by random hashing. The objective is to design efficient cache organization and data placement that minimize the miss probability. The former allocates the total memory space to each of the involved caches. The latter decides data routing and replication strategies. Analytically, we characterize the asymptotic miss probabilities for unreliable LRU caches, and optimize the system design. Remarkably, these results sometimes are counterintuitive, differing from the ones obtained for reliable caches. We discover an interesting phenomenon: allocating the cache space unequally can achieve a better performance, even when channel reliability levels are equal. In addition, we prove that splitting the total cache space into separate LRU caches can achieve a lower asymptotic miss probability than organizing the total space in a single LRU cache. These results provide new and even counterintuitive insights that motivate novel designs for caching systems over unreliable channels.
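The key object of study above, an LRU cache whose hits can still fail on an unreliable channel, is easy to simulate. The sketch below estimates the miss probability of a single LRU cache where a request misses if the item is absent or the channel fails; uniform item popularity and the failure model are illustrative assumptions, and the paper's asymptotic analysis and multi-cache hashing are not reproduced:

```python
from collections import OrderedDict
import random

def lru_miss_probability(n_items, cache_size, reliability,
                         n_requests=20000, seed=1):
    """Monte-Carlo miss-probability estimate for one LRU cache over an
    unreliable channel: a cached item is delivered only with probability
    `reliability`; a failed delivery counts as a miss."""
    cache = OrderedDict()
    rng = random.Random(seed)
    misses = 0
    for _ in range(n_requests):
        item = rng.randrange(n_items)          # uniform popularity
        hit = item in cache and rng.random() < reliability
        if not hit:
            misses += 1
        # LRU bookkeeping: most recently requested item moves to the back
        if item in cache:
            cache.move_to_end(item)
        else:
            cache[item] = True
            if len(cache) > cache_size:
                cache.popitem(last=False)      # evict least recently used
    return misses / n_requests
```

Even this toy model shows the abstract's starting point: with unreliable delivery, the miss probability no longer depends on cache contents alone, so reliable-channel intuitions about sizing and pooling need rechecking.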
- Conference Article
12
- 10.1109/icc.2016.7511348
- May 1, 2016
Wireless caching has been used to improve network performance and reduce bandwidth and energy consumption. In this paper, we study joint admission control and content caching for wireless access points with energy harvesting capability. Given a limited energy supply, the access points, in a competitive environment, aim to maximize their payoff, defined in terms of revenue, by optimizing their admission control and content caching policies. Moreover, the throughput of content transmission by each access point has to be maintained above a certain threshold. Thus, we propose a constrained stochastic game to model this competitive caching scenario. The equilibrium policy, which is a mapping from the energy, cache, and demand states to the action, is obtained from the model. The performance evaluation shows that the joint admission control and content caching policy achieves significantly better performance than the baseline schemes, especially when the energy harvesting rate is constrained.
- Research Article
21
- 10.1016/j.jnca.2019.102467
- Nov 15, 2019
- Journal of Network and Computer Applications
Optimal caching policy for wireless content delivery in D2D networks
- Research Article
113
- 10.1109/tcomm.2018.2863364
- Dec 1, 2018
- IEEE Transactions on Communications
Prior works on designing caching policies do not distinguish content popularity from user preference. In this paper, we illustrate the caching gain from exploiting individual user behavior in sending requests. After showing the connection between the two concepts, we provide a model for synthesizing user preference from content popularity. We then optimize the caching policy with knowledge of user preference and activity level to maximize the offloading probability for cache-enabled device-to-device communications, and develop a low-complexity algorithm to find the solution. In order to learn user preference, we model the user request behavior with probabilistic latent semantic analysis, and learn the model parameters by the expectation maximization algorithm. By analyzing a MovieLens dataset, we find that user preferences are less similar, and that the activity level and topic preference of each user change slowly over time. Based on this observation, we introduce a prior knowledge-based learning algorithm for user preference, which can shorten the learning time. Simulation results show a remarkable performance gain of the caching policy with user preference over the existing policy with content popularity, both with the real dataset and with synthetic data validated by the real dataset.
- Conference Article
6
- 10.1109/infocom.2019.8737363
- Apr 1, 2019
Least-recently-used (LRU) caching and its variants have conventionally been used as a fundamental and critical method to ensure fast and efficient data access in computer and communication systems. Emerging data-intensive applications over unreliable channels, e.g., mobile edge computing and wireless content delivery networks, have imposed new challenges in optimizing LRU caching systems in environments prone to failures. Most existing studies focus on reliable channels, e.g., on wired Web servers and within data centers, which have already yielded good insights with successful algorithms on how to reduce cache miss ratios. Surprisingly, we show that these widely held insights do not necessarily hold true for unreliable channels. We consider a single-hop multi-cache distributed system with data items being dispatched by random hashing. The objective is to achieve efficient cache organization and data placement. The former allocates the total memory space to each of the involved caches. The latter decides data routing strategies and data replication schemes. Analytically we characterize the unreliable LRU caches by explicitly deriving their asymptotic miss probabilities. Based on these results, we optimize the system design. Remarkably, these results sometimes are counterintuitive, differing from the ones obtained for reliable caches. We discover an interesting phenomenon: asymmetric cache organization is optimal even for symmetric channels. Specifically, even when channel unreliability probabilities are equal, allocating the cache spaces unequally can achieve a better performance. We also propose an explicit unequal allocation policy that outperforms the equal allocation. In addition, we prove that splitting the total cache space into separate LRU caches can achieve a lower asymptotic miss probability than resource pooling that organizes the total space in a single LRU cache. 
These results provide new and even counterintuitive insights that motivate novel designs for caching systems over unreliable channels. They can potentially be exploited to further improve the system performance in real practice.
- Conference Article
- 10.1109/wcnc45663.2020.9120754
- May 1, 2020
User fairness is an important metric for cellular systems. It has been widely considered for wireless transmission when optimizing radio resource allocation, but rarely considered for femto-caching. In this paper, we optimize caching and bandwidth allocation policies to improve long-term user fairness during content placement and content delivery by harnessing heterogeneous user preference. To this end, we maximize the minimal average data rate, where the average is taken over large- and small-scale channel gains as well as individual user requests. This gives rise to a complicated two-timescale optimization problem involving functional optimization. The objective function of the problem does not have a closed-form expression due to the unknown user preference and channel distributions, and the "variables" to be optimized include a function. To solve such a challenging problem, we first optimize the bandwidth allocation policy given an arbitrary caching policy, user locations, and user requests, whose structure can be found. We next optimize the caching policy given the optimized bandwidth allocation policy. To handle the difficulty of unknown distributions, we resort to stochastic optimization. Simulation results show that optimizing the caching policy by exploiting user preference can support a much higher minimal average rate than optimizing it based on content popularity when user preferences are less similar. Besides, better user fairness can be achieved by optimizing the caching policy than by optimizing bandwidth allocation.
- Research Article
33
- 10.1109/tvt.2015.2397862
- Feb 1, 2016
- IEEE Transactions on Vehicular Technology
Accompanying the increasing interest in vehicular ad hoc networks (VANETs), there is a demand for high-quality and real-time video streaming on a VANET for safety and infotainment applications. Video streaming on a VANET faces extra issues compared with video streaming on a mobile ad hoc network (MANET), such as the highly dynamic topology. There are also benefits to VANETs, such as large buffer and battery capacity, predictable motion of vehicles, and powerful central and graphics processing units (CPU and GPU, respectively). However, the high packet loss ratio of a VANET is a critical issue for high-quality video streaming. In this paper, we propose an error recovery process for high-quality and real-time video streaming in a VANET, which we call multichannel error recovery video streaming (MERVS). MERVS transmits the video through two different channels: a reliable channel and an unreliable channel. Because of the importance of the intraframes (I-frames) to video quality, I-frames are transmitted through the reliable channel. The interframes are transmitted through the unreliable channel because of the limited resources of the reliable channel. The priority queue, quick start, and scalable reliable channel (SRC) techniques are also integrated to improve the delay of MERVS. Based on the conducted simulation results, MERVS can provide higher-quality video streaming than forward error correction (FEC), with a time delay similar to that of the real-time transport protocol/user datagram protocol (RTP/UDP) in a VANET.
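The channel-splitting rule at the heart of MERVS is simple to state in code: I-frames go over the reliable channel, all other frames over the unreliable one. The `(frame_type, payload)` tuple format is an illustrative assumption; queueing, quick start, and SRC are omitted:

```python
def route_frames(frames):
    """Split a video frame sequence between two channels, MERVS-style:
    I-frames -> reliable channel, other frames (P/B) -> unreliable
    channel. `frames` is a list of (frame_type, payload) tuples."""
    reliable, unreliable = [], []
    for ftype, payload in frames:
        (reliable if ftype == "I" else unreliable).append((ftype, payload))
    return reliable, unreliable
```

The rationale, per the abstract, is that losing an I-frame corrupts every frame that is predicted from it, so I-frames justify the scarcer reliable-channel resource.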
- Conference Article
53
- 10.1109/vtcspring.2017.8108572
- Jun 1, 2017
Cache-enabled device-to-device (D2D) communications can boost network throughput. By pre-downloading contents to the local caches of users, the content requested by a user can be transmitted via D2D links by other users in proximity. Prior works optimize the caching policy at users with knowledge of content popularity, defined as the probability distribution over files in a library being requested by all users. However, content popularity cannot reflect the interest of each individual user, and thus existing caching policies based on popularity may not fully capture the performance gain introduced by caching. In this paper, we optimize the caching policy for cache-enabled D2D by learning user preference, defined as the conditional probability distribution of a user's request given that the user sends a request. We first formulate an optimization problem with given user preference to maximize the offloading probability, which is proved to be NP-hard, and then provide a greedy algorithm to find the solution. In order to predict the preference of each individual user, we model the user request behavior by probabilistic latent semantic analysis (pLSA), and then apply the expectation maximization (EM) algorithm to estimate the model parameters. Simulation results show that using pLSA can learn user preference quickly. Compared to existing caching policies exploiting content popularity, the offloading gain achieved by the proposed policy is remarkably improved, even with predicted user preference.
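The greedy step mentioned above can be sketched for a single D2D helper: repeatedly add the file with the largest marginal gain in offloading probability, where each file's gain is the activity-weighted sum of user preferences for it. The single-cache setting and the variable names are simplifying assumptions; the paper's problem spans many users' caches:

```python
def greedy_cache(user_pref, activity, capacity):
    """Greedy caching for one D2D helper.

    user_pref: user_pref[u][f] = probability user u requests file f,
               given that u sends a request.
    activity:  activity[u] = how often user u sends requests.
    Returns the set of cached file ids (size `capacity`).
    """
    n_files = len(user_pref[0])
    cached = set()
    while len(cached) < capacity:
        best_f, best_gain = None, -1.0
        for f in range(n_files):
            if f in cached:
                continue
            # Marginal offloading gain of adding file f
            gain = sum(a * pref[f] for a, pref in zip(activity, user_pref))
            if gain > best_gain:
                best_f, best_gain = f, gain
        cached.add(best_f)
    return cached
```

With a single cache the greedy choice reduces to picking the top-`capacity` files by weighted demand; the greedy structure matters in the multi-helper problem, where a file cached at one helper changes the marginal gain of caching it elsewhere.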