Fog computing, a relatively recent extension of cloud computing, is considered a promising paradigm for intelligent vehicular networks owing to its ability to reduce delays and enhance network efficiency. It plays a crucial role in tackling the distinct challenges of vehicular computing, including real-time data processing and bandwidth limitations. By relocating computational resources closer to the network's edge, it improves the effectiveness, dependability, and safety of vehicular applications while strengthening privacy and security. Fog offloading, in turn, plays a pivotal role in fog and edge computing architectures: its objective is to distribute data and assign task processing efficiently among edge devices, fog nodes, and cloud resources. Efficient fog offloading strategies are adaptable, scalable, and fault-tolerant, making them indispensable for optimizing these architectures. Although efficiency and quality of service are crucial objectives in a fog computing-based intelligent vehicular network environment, performance remains a significant concern that cannot be overlooked. Accordingly, a thorough performance evaluation is necessary to assess the efficacy of the probabilistic offloading approach on a fog server. This research proposes a fluid queue approach that accounts for the continuous flow of data packets when evaluating a fog server's performance. For a fog computing-based intelligent vehicular network (FCIVN) with numerous heterogeneous smart vehicles (SVs), we construct a multiple-input fluid queue to model the tasks handed over to the fog server and evaluate its performance. In an FCIVN, arrivals at the fog server are drawn from multiple sources. Accordingly, in the proposed model the fluid queue is modulated by several independent, distinct finite-state birth–death processes (BDPs) that govern the variable inflow, and by another BDP that governs the variable outflow. We present the buffer occupancy distribution of an intermediate fog server within an FCIVN and assess performance measures in terms of the offered load, expected buffer level, average throughput, and average latency of the tasks. Finally, quantitative illustrations demonstrate the appropriateness of the fluid queue model developed in this study; the results are consistent with the expected behaviour of these performance indicators.
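For readers less familiar with Markov-modulated fluid queues, the following is a minimal sketch of the standard formulation the abstract alludes to; the notation (generator Q, rate matrix R, joint distribution F_j) is illustrative and not taken from the paper itself. Let Z(t) denote the background environment obtained by superposing the independent inflow BDPs and the outflow BDP (its generator Q is the Kronecker sum of the individual generators, since the processes are independent), and let r_j be the net fluid rate (total inflow rate minus outflow rate) when the environment is in state j. The buffer content X(t) then evolves as
\[
\frac{dX(t)}{dt} =
\begin{cases}
r_{Z(t)}, & X(t) > 0,\\
\max\{r_{Z(t)},\,0\}, & X(t) = 0,
\end{cases}
\]
and the stationary joint distribution \(F_j(x) = P(X \le x,\, Z = j)\) satisfies the first-order system
\[
\frac{d}{dx}\,\mathbf{F}(x)\,R = \mathbf{F}(x)\,Q, \qquad R = \operatorname{diag}(r_j),
\]
from which measures such as the expected buffer level, \(E[X] = \int_0^{\infty}\bigl(1 - \sum_j F_j(x)\bigr)\,dx\), follow.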