DPNVC: A novel density-based probability VANET caching framework built upon the NDN
- Research Article
20
- 10.1109/mnet.011.2000663
- Jul 1, 2021
- IEEE Network
With the rapid development of smart cities and 5G, user demand for Internet services has increased exponentially. Through collaborative content sharing, the storage limitation of a single edge server (ES) can be overcome. However, when mobile users need to download whole content items while moving through multiple regions, independently deciding the caching content for ESs in different regions may result in redundant caching. Furthermore, frequent switching of communication connections during user movement also causes retransmission delay. As a revolutionary approach in the artificial intelligence field, deep reinforcement learning (DRL) has achieved great success in solving high-dimensional problems related to network resource management. Therefore, we integrate collaborative caching and DRL to build an intelligent edge caching framework that realizes collaborative caching between the cloud and ESs. In this caching framework, a federated-machine-learning-based user behavior prediction model is first designed to characterize the content preference and movement trajectory of mobile users. Next, to achieve efficient resource aggregation of ESs, a user-behavior-aware dynamic collaborative caching domain (DCCD) construction and management mechanism is devised to divide ESs into clusters, select cluster heads, and set the re-clustering rules. Then a DRL-based content caching and delivery algorithm is presented to decide the caching content of ESs in a DCCD from a global perspective and plan the transmission path for users, which reduces redundant content and transmission delay. In particular, when a user request cannot be satisfied by the current DCCD, a cross-domain content delivery strategy allows ESs in other DCCDs to provide and forward content to the user, avoiding the traffic pressure and delay caused by requesting services from the cloud.
The simulation results show that the proposed collaborative caching framework can improve user satisfaction in terms of content hit rate and service delay.
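The core of a DRL-based caching decision like the one above can be illustrated with a minimal tabular Q-learning sketch. The state/action encoding, reward (cache hit), and arbitrary eviction below are illustrative assumptions, far simpler than the paper's actual framework with its DCCD clustering and delivery planning:

```python
import random
from collections import defaultdict

def q_learning_cache(requests, catalog_size, cache_size,
                     episodes=200, alpha=0.1, gamma=0.9, eps=0.1):
    """Learn cache admissions from a request trace via tabular Q-learning.
    State: the frozenset of cached items; action: an item to admit;
    reward: 1 if the next request hits the cache, else 0."""
    random.seed(0)                           # deterministic for demonstration
    Q = defaultdict(float)
    cache = set(range(cache_size))           # arbitrary initial cache contents
    for _ in range(episodes):
        for req in requests:
            state = frozenset(cache)
            if random.random() < eps:        # epsilon-greedy exploration
                action = random.randrange(catalog_size)
            else:
                action = max(range(catalog_size), key=lambda a: Q[(state, a)])
            if action not in cache:          # admit the action, evict one item
                cache.remove(next(iter(cache)))
                cache.add(action)
            reward = 1.0 if req in cache else 0.0
            nstate = frozenset(cache)
            best_next = max(Q[(nstate, a)] for a in range(catalog_size))
            Q[(state, action)] += alpha * (reward + gamma * best_next
                                           - Q[(state, action)])
    return cache
```

A real system would replace the table with a neural approximator and fold transmission delay into the reward, but the state-action-reward loop is the same.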
- Conference Article
16
- 10.1145/3106237.3106303
- Aug 21, 2017
Symbolic program analysis techniques rely on satisfiability-checking constraint solvers, while quantitative program analysis techniques rely on model-counting constraint solvers. Hence, the efficiency of satisfiability checking and model counting is crucial for efficiency of modern program analysis techniques. In this paper, we present a constraint caching framework to expedite potentially expensive satisfiability and model-counting queries. Integral to this framework is our new constraint normalization procedure under which the cardinality of the solution set of a constraint, but not necessarily the solution set itself, is preserved. We extend these constraint normalization techniques to string constraints in order to support analysis of string-manipulating code. A group-theoretic framework which generalizes earlier results on constraint normalization is used to express our normalization techniques. We also present a parameterized caching approach where, in addition to storing the result of a model-counting query, we also store a model-counter object in the constraint store that allows us to efficiently recount the number of satisfying models for different maximum bounds. We implement our caching framework in our tool Cashew, which is built as an extension of the Green caching framework, and integrate it with the symbolic execution tool Symbolic PathFinder (SPF) and the model-counting constraint solver ABC. Our experiments show that constraint caching can significantly improve the performance of symbolic and quantitative program analyses. For instance, Cashew can normalize the 10,104 unique constraints in the SMC/Kaluza benchmark down to 394 normal forms, achieve a 10x speedup on the SMC/Kaluza-Big dataset, and an average 3x speedup in our SPF-based side-channel analysis experiments.
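The normalization-keyed cache idea can be sketched in a few lines. The normalization here, sorting conjuncts and canonically renaming variables, is a deliberately simplified stand-in for Cashew's group-theoretic procedure, but it shows how syntactically different constraints can share one cached model count:

```python
import re

def normalize(constraint: str) -> str:
    """Canonicalize a conjunction like 'y > 2 & x < 5': sort the conjuncts,
    then rename variables in order of first appearance (x,y -> v0,v1)."""
    conjuncts = sorted(c.strip() for c in constraint.split("&"))
    canon = " & ".join(conjuncts)
    mapping, out = {}, []
    for tok in re.split(r"(\W+)", canon):    # keep separators between tokens
        if tok.isidentifier():
            mapping.setdefault(tok, f"v{len(mapping)}")
            tok = mapping[tok]
        out.append(tok)
    return "".join(out)

class ModelCountCache:
    """Cache model counts keyed by normal form: constraints sharing a normal
    form share a solution-set cardinality, so one count serves all of them."""
    def __init__(self, counter):
        self.counter = counter               # the expensive model-counting oracle
        self.store = {}
        self.hits = 0
    def count(self, constraint):
        key = normalize(constraint)
        if key in self.store:
            self.hits += 1
        else:
            self.store[key] = self.counter(constraint)
        return self.store[key]
```

For example, `y > 2 & x < 5` and `a < 5 & b > 2` normalize to the same key, so only the first triggers the solver.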
- Conference Article
- 10.1145/2771783.2784773
- Jul 13, 2015
Despite the remarkable advances attained by the SMT community in the last decade, solving complex formulas still represents the main bottleneck to the scalability of program analysis techniques. Recent research work has shown that formulas generated during program analysis recur, and such redundancy can be captured and exploited by means of caching frameworks to avoid repeating complex queries to solvers. Although current approaches show that reusing formulas syntactically can indeed reduce the impact of SMT solvers on program analysis, they still suffer from being logic-dependent and from performing poorly on huge sets of heterogeneous formulas. The core idea of our approach is to go beyond merely syntactic caching frameworks by designing a caching framework that is able to reuse proofs instead of formulas. In fact, even formulas that are syntactically different can share solutions. We aim to study the recurrence of proofs across heterogeneous formulas, and to define a technique to efficiently retrieve such proofs. We plan to exploit a suitable distance function that measures the amount of proofs shared by two formulas to allow the efficient retrieval of candidate proofs within a potentially large space of proofs. In this paper, we present the problem, draft the core idea, discuss the early results and present our research plans.
- Conference Article
2
- 10.1109/nana.2017.42
- Oct 1, 2017
A virtual machine (VM) disk image plays an important role in the VM's whole life cycle, as a container of the VM's operating system, its applications' running environment, and data. The performance of VM access to disk images directly affects the performance of applications running in the VM and, further, of the entire cloud computing system. According to their use of caches, existing VM image storage systems are divided into three categories, namely: A) no-cache frameworks; B) local cache frameworks; C) collaborative cache frameworks. In this paper, the latest research achievements on VM image storage systems are reviewed. Then, the challenges of VM image management in cloud data centers are discussed. At last, we point out possible directions for future research.
- Conference Article
51
- 10.1145/2950290.2950303
- Nov 1, 2016
To help improve the performance of database-centric cloud-based web applications, developers usually use caching frameworks to speed up database accesses. Such caching frameworks require extensive knowledge of the application to operate effectively. However, all too often developers have limited knowledge about the intricate details of their own application. Hence, most developers find configuring caching frameworks a challenging and time-consuming task that requires extensive and scattered code changes. Furthermore, developers may also need to frequently change such configurations to accommodate the ever changing workload.
- Research Article
9
- 10.1109/tvt.2020.3047511
- Dec 28, 2020
- IEEE Transactions on Vehicular Technology
An effective method for supporting the large volume of information required for future vehicular networks is leveraging caching techniques as well as relying on millimeter-wave (mmWave) frequencies. However, characterizing such a system under mmWave directional beamforming and vehicular mobility is a complex task. In this article, we propose the first stochastic geometry framework for V2X caching in mmWave networks. In addition to common parameters considered in stochastic geometry models, our derivations account for caching as well as the speed and the trajectory of the vehicles. Furthermore, our evaluations provide interesting design insights: (i) higher base station/vehicle densities do not necessarily improve caching performance; (ii) although using a narrower beam leads to a higher SINR, it also reduces the connectivity probability; and (iii) V2X caching can be an inexpensive way of compensating for some of the unwanted mmWave channel characteristics.
- Research Article
27
- 10.1016/j.peva.2017.04.006
- May 18, 2017
- Performance Evaluation
Caching games between Content Providers and Internet Service Providers
- Conference Article
- 10.4108/eai.25-10-2016.2266632
- Nov 29, 2016
We consider a scenario where an Internet Service Provider (ISP) serves users that choose digital content among M Content Providers (CP). In the status quo, these users pay both access fees to the ISP and content fees to each chosen CP; however, neither the ISP nor the CPs share their profit. We revisit this model by introducing a different business model where the ISP and the CP may have motivation to collaborate in the framework of caching. The key idea is that the ISP deploys a cache for a CP provided that they share both the deployment cost and the additional profit that arises due to caching. Under the prism of coalitional games, our contributions include the application of the Shapley value for a fair splitting of the profit, the stability analysis of the coalition and the derivation of closed-form formulas for the optimal caching policy. Our model captures not only the case of non-overlapping contents among the CPs, but also the more challenging case of overlapping contents; for the latter case, a non-cooperative game among the CPs is introduced and analyzed to capture the negative externality on the demand of a particular CP when caches for other CPs are deployed.
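The Shapley-value profit split used above can be illustrated generically. The characteristic (worth) function below is a made-up two-player example, not the paper's closed-form caching model:

```python
from itertools import permutations

def shapley(players, v):
    """Shapley value: each player's marginal contribution v(S+p) - v(S),
    averaged over all orders in which players can join the coalition."""
    phi = {p: 0.0 for p in players}
    orders = list(permutations(players))
    for order in orders:
        coalition = frozenset()
        for p in order:
            phi[p] += v(coalition | {p}) - v(coalition)
            coalition = coalition | {p}
    return {p: phi[p] / len(orders) for p in players}

# Toy worth function: only the full ISP-CP coalition unlocks the caching
# profit of 10; singletons earn nothing extra (hypothetical numbers).
worth = {frozenset(): 0, frozenset({"ISP"}): 0,
         frozenset({"CP"}): 0, frozenset({"ISP", "CP"}): 10}
split = shapley(["ISP", "CP"], lambda s: worth[s])
```

With symmetric players the split is 5.0 each; asymmetric worth functions (e.g. a CP whose content is partly served without the cache) shift the split accordingly.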
- Conference Article
3
- 10.1145/3241539.3267752
- Oct 15, 2018
We describe the current version of Wi-Cache, an SDN framework for caching at the WiFi edge. Wi-Cache is motivated by the belief that edge caching technologies are needed to augment emerging network technologies to meet the increasing demand for content, in volume, quality, and variety, as content itself changes its characteristics significantly. Wi-Cache is being used to test new ideas for edge caching. Specifically, Wi-Cache is a framework for edge caching that allows caching and delivery of content on WiFi APs. Apart from network-induced handoff of clients, it allows communication between the APs for content delivery. We have also developed an API that is exposed for the implementation of algorithms for content delivery and placement, and cache replacement.
- Conference Article
16
- 10.1109/ccgrid.2017.41
- May 1, 2017
Due to its simplicity and scalability, MapReduce has become a de facto standard computing model for big data processing. Since the original MapReduce model was only appropriate for embarrassingly parallel batch processing, many follow-up studies have focused on improving the efficiency and performance of the model. Spark follows one of these recent trends by providing in-memory processing capability to reduce slow disk I/O for iterative computing tasks. However, the acceleration of Spark's in-memory processing using graphics processing units (GPUs) is challenging due to its deep memory hierarchy and host-to-GPU communication overhead. In this paper, we introduce a novel GPU-accelerated MapReduce framework that extends Spark's in-memory processing so that iterative computing is performed only in the GPU memory. Having discovered that the main bottleneck in the current Spark system for GPU computing is data communication on a Java virtual machine, we propose a modification of the current Spark implementation to bypass expensive data management for iterative task offloading to GPUs. We also propose a novel GPU in-memory processing and caching framework that minimizes host-to-GPU communication via lazy evaluation and reuses GPU memory over multiple mapper executions. The proposed system employs message-passing interface (MPI)-based data synchronization for inter-worker communication so that more complicated iterative computing tasks, such as iterative numerical solvers, can be efficiently handled. We demonstrate the performance of our system in terms of several iterative computing tasks in big data processing applications, including machine learning and scientific computing. We achieved up to a 50x speedup over conventional Spark and about a 10x speedup over GPU-accelerated Spark.
- Book Chapter
- 10.1007/978-3-030-11404-6_8
- Jan 1, 2019
Cloud service providers augment a SQL database management system with a cache to enhance system performance for workloads that exhibit a high read to write ratio. These in-memory caches provide a simple programming interface such as get, put, and delete. Using their software architecture, different caching frameworks can be categorized into Client-Server (CS) and Shared Address Space (SAS) systems. Example CS caches are memcached and Redis. Example SAS caches are Java Cache standard and its Google Guava implementation, Terracotta BigMemory and KOSAR. How do CS and SAS architectures compare with one another and what are their tradeoffs? This study quantifies an answer using BG, a benchmark for interactive social networking actions. In general, obtained results show SAS provides a higher performance with write policies playing an important role.
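The get/put/delete interface common to both architectures, together with one of the write policies the study highlights, can be sketched in a few lines. This is a minimal in-process (SAS-style) write-through cache over a dict standing in for the SQL DBMS; memcached and Redis expose the same three verbs over the network:

```python
class WriteThroughCache:
    """In-process key-value cache in front of a backing store.
    Write-through: puts update the store first, keeping cache and
    store consistent at the cost of a store write per put."""
    def __init__(self, store):
        self.store = store            # e.g. a dict standing in for the DBMS
        self.cache = {}
    def get(self, key):
        if key not in self.cache:     # miss: read through to the store
            if key in self.store:
                self.cache[key] = self.store[key]
            else:
                return None
        return self.cache[key]
    def put(self, key, value):
        self.store[key] = value       # write-through: store before cache
        self.cache[key] = value
    def delete(self, key):
        self.store.pop(key, None)
        self.cache.pop(key, None)
```

A write-back variant would buffer puts in the cache and flush them to the store later, trading consistency for write latency, which is why the study finds write policies play an important role.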
- Conference Article
6
- 10.1109/icc.2011.5962882
- Jun 1, 2011
Large Peer-to-Peer (P2P) systems for file transfer exhibit the presence of communities based on semantic, geographic, or organizational interests of users. Generally, resources commonly shared within individual communities are relatively unpopular and inconspicuous in the system-wide behavior. These communities are unable to benefit significantly from performance enhancement schemes such as caching that focus only on the most dominant queries. We propose a generic caching framework that enhances lookup performance of individual communities while providing even better performance to the dominant communities. The caching framework can be used with any structured P2P system that provides alternative paths to a given destination. Furthermore, the solution is adaptive to changing popularity and user interests, works with any skewed distribution of queries, needs small caches, utilizes local statistics, and introduces minimal modifications and overhead to the overlay network. Simulations based on a Chord overlay show a 40% reduction in average path length, with individual communities seeing a three-times improvement in performance over system-wide caching.
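The "local statistics" idea can be sketched as a per-node cache that admits a lookup result only once its locally observed frequency crosses a threshold, so community-popular keys get cached even when they are globally rare. The threshold-based admission and least-queried eviction below are illustrative assumptions, not the paper's exact policy:

```python
from collections import Counter

class CommunityCache:
    """Per-node lookup cache driven purely by local query statistics."""
    def __init__(self, capacity, threshold=3):
        self.capacity, self.threshold = capacity, threshold
        self.freq = Counter()                 # local query frequency counts
        self.cache = {}
    def lookup(self, key, resolve):
        """Return (value, hit). `resolve` stands in for full overlay routing."""
        self.freq[key] += 1
        if key in self.cache:
            return self.cache[key], True      # served from the short path
        value = resolve(key)                  # full Chord-style routing
        if self.freq[key] >= self.threshold:  # locally popular: admit
            if len(self.cache) >= self.capacity:
                coldest = min(self.cache, key=self.freq.get)
                del self.cache[coldest]       # evict the least-queried entry
            self.cache[key] = value
        return value, False
```

Because the counts are local, a key queried often within one community is cached at that community's nodes regardless of its system-wide popularity.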
- Research Article
4
- 10.1109/tii.2022.3162306
- Sep 1, 2022
- IEEE Transactions on Industrial Informatics
Nowadays, billions of devices are connected to the Internet, enabling widely deployed Internet of Things (IoT) systems, such as smart cities, smart healthcare, and intelligent plants, to capture a great quantity of sensing data. Consequently, data transmission, processing, and analysis in IoT applications put great pressure on the central server. Fortunately, distributed intelligence is one of the potential solutions: it can greatly relieve server pressure via plenty of terminal devices, which collaboratively perceive and handle the mass of data to improve the reliability, scalability, and security of industrial IoT systems. As future IoT systems will embrace more wireless sensors and devices, high-performance computing and high-bandwidth, low-latency communication are increasingly required, and many new research opportunities and challenges for distributed intelligence over the Internet of Things have arisen. To promote the development of distributed intelligence technology, this special section (SS) focuses on various technologies and platforms regarding industrial IoT systems. The special section received nearly 50 submitted manuscripts, of which 10 were accepted after rigorous peer review. Each manuscript went through multiple rounds of review with at least three or four reviewers, focusing on the problems to be solved and the innovation of the work. The accepted papers are summarized as follows. Considering the joint optimization of offloading decisions and resource allocation under limited resource constraints in collaborative edge computing networks with multiple IIoT devices and MEC servers, an improved differential evolution algorithm [7] is proposed to minimize the weighted sum of energy consumption and time delay, which can effectively reduce the system delay and energy consumption.
In order to improve the performance of task scheduling in cloud computing, Attiya et al. [1] propose a novel hybrid swarm intelligence method, MRFOSSA, which uses a modified Manta-Ray Foraging Optimizer (MRFO) and the Salp Swarm Algorithm (SSA). MRFOSSA is superior to other methods in terms of makespan time and cloud throughput. The research goal of the paper [5] is to design an intelligent computing offloading strategy for industrial applications in order to optimize costs and mitigate energy losses. The paper proposes to combine a fog controller and AI-based learning techniques so that the fog controller can intelligently assign tasks to the most appropriate fog devices and find the appropriate path to the target. Considering resource utilization efficiency under dynamic overload requests and network states in IIoT, Chen et al. [2] propose a DRL-based intelligent SFC orchestration scheme and jointly optimize VNF deployment and SFC embedding via an improved DDQN algorithm, which improves resource utilization rate, execution cost, and delay compared with other representative schemes. To solve the problem of resource allocation and energy cost in the Internet of Vehicles, Kong et al. [8] design a joint computing and caching framework and formulate the problem as a reinforcement learning problem to minimize the energy cost. On this basis, an optimization algorithm based on DDPG is proposed, which can effectively decrease energy costs. To reduce the number of queries to the target model when constructing adversarial examples, Zhang et al. [10] propose generating adversarial examples with a shadow model (GASM), i.e., transferring the query operations to the designed shadow model, which can achieve high attack success rates. Chen et al. [3] revise a Decentralized-Wireless-Federated-Learning algorithm (DWFL) which utilizes the superposition property of the analog scheme.
It can solve the problems of single points of failure, limited bandwidth resources, and privacy protection in wireless federated learning algorithms, and can be applied widely in wireless IoT networks. To reduce resource consumption in CNN-based applications, Jia et al. [6] propose the CNN-based Resource Optimization APProach, which utilizes model compression and computation sharing to optimize at the inner-model and inter-model levels, respectively; the comparison results show superior performance in scalability and a decrease in resource cost. In mobile crowdsensing activities, Gao et al. [4] propose a differential Location Privacy-preserving Mechanism based on Trajectory obfuscation (LPMT) to protect the location privacy of mobile users, which includes three operations: stay-point extraction, stay-point obfuscation, and stay-point sampling. In order to mimic the task-free bottom-up visual attention process by predicting salient regions in natural images, Umer et al. [9] propose a Pseudo Knowledge Distillation (PKD) model based on knowledge distillation and a pseudo-labelling technique, which is computationally efficient and suitable for real-time on-device saliency prediction.
- Research Article
9
- 10.1016/j.suscom.2021.100555
- Mar 30, 2021
- Sustainable Computing: Informatics and Systems
Decentralized adaptive resource-aware computation offloading & caching for multi-access edge computing networks
- Research Article
- 10.1038/s41598-025-26079-w
- Nov 26, 2025
- Scientific Reports
With the explosive growth of short video traffic and the increasing demand for low-latency content delivery, efficient edge caching strategies have become critical for mobile networks. However, the highly dynamic and personalized characteristics of short video services present substantial challenges for traditional caching approaches, which often depend exclusively on static popularity metrics. This paper proposes DECC (Dynamic Edge-caching through Content Popularity and Crowd Prediction), a novel caching framework that jointly models content popularity and user access behavior to optimize caching decisions at edge nodes. DECC integrates a hybrid deep learning architecture comprising 1D Convolutional Neural Networks (Conv1D), Long Short-Term Memory (LSTM) networks, and Gated Recurrent Units (GRU) to capture the temporal dynamics of both video requests and user activity. A fusion mechanism is introduced to generate cache priority scores based on dual-path predictions, enabling more accurate and adaptive content placement. Experimental evaluations conducted on real-world datasets demonstrate that DECC consistently surpasses baseline methods in cache hit rate, access latency reduction, and overall resource utilization efficiency. These results highlight the potential of DECC as a scalable and intelligent caching solution for next-generation short video edge services.
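The fusion mechanism's cache-priority scoring can be sketched as a weighted combination of the two prediction paths. The linear fusion, the fixed weight `alpha`, and the toy scores below are illustrative assumptions; in DECC the Conv1D/LSTM/GRU predictors and the fusion are learned from data:

```python
def cache_priority(popularity_pred, crowd_pred, alpha=0.6):
    """Fuse per-video popularity predictions with predicted local user
    activity (crowd) into a single cache-priority score per video."""
    assert popularity_pred.keys() == crowd_pred.keys()
    return {vid: alpha * popularity_pred[vid] + (1 - alpha) * crowd_pred[vid]
            for vid in popularity_pred}

def place(priority, cache_size):
    """Cache the top-k videos by fused priority score."""
    return sorted(priority, key=priority.get, reverse=True)[:cache_size]

# Hypothetical normalized predictions for three short videos: v1 is
# globally popular, v2 is what the local crowd is predicted to watch.
pop   = {"v1": 0.9, "v2": 0.4, "v3": 0.2}
crowd = {"v1": 0.1, "v2": 0.8, "v3": 0.3}
chosen = place(cache_priority(pop, crowd), cache_size=2)
```

Here the dual-path score keeps both the globally popular v1 and the locally demanded v2, which a popularity-only policy would rank behind v1 and v3's raw request counts might miss.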