Abstract

Rapid technological advancement has led to a revolution in vehicular networks. Fog computing is integrated into existing vehicular networks to form Fog-Enabled Connected Vehicle Networks (FCVN), reducing latency and increasing throughput. Orchestration schemes have been proposed to solve the load-balancing issues that arise with conventional device-association schemes. In particular, graph-based orchestration schemes have shown a remarkable increase in service capability. However, solving the orchestration problem with multiple objectives using iterative approaches is difficult and slow to converge. To address these problems, we propose an improvement to existing graph-based orchestration schemes using Reinforcement Learning (RL). A robust reward function is also proposed to tune the importance given to each objective. The maximum resource reduction achievable between every pair of fog nodes (FNs) is computed using RL and is taken as the weight of the corresponding link. Maximum weight matching is then performed on the resulting graph to find optimal pairs of FNs for service migrations. To demonstrate the algorithm, two objectives, energy consumption and service capability, are considered. The results reveal that our algorithm simultaneously improves both objectives compared to existing graph-based orchestration schemes.
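The matching step described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the link weights here are hypothetical placeholder values standing in for the RL-computed maximum resource reductions, the FN names are invented, and a brute-force search replaces the matching algorithm the authors may have used.

```python
from itertools import combinations

# Hypothetical resource-reduction weights between fog-node (FN) pairs.
# In the proposed scheme these values would come from the RL agent;
# here they are illustrative placeholders.
weights = {
    frozenset({"FN1", "FN2"}): 4.0,
    frozenset({"FN1", "FN3"}): 2.5,
    frozenset({"FN2", "FN4"}): 3.0,
    frozenset({"FN3", "FN4"}): 5.0,
}

def max_weight_matching(weights):
    """Brute-force maximum weight matching: enumerate every subset of
    edges and keep the vertex-disjoint subset with the largest total
    weight. Adequate for tiny FN graphs; a real deployment would use
    a polynomial-time algorithm such as Blossom."""
    edges = list(weights)
    best, best_w = [], 0.0
    for r in range(1, len(edges) + 1):
        for subset in combinations(edges, r):
            nodes = [n for edge in subset for n in edge]
            if len(nodes) == len(set(nodes)):  # edges share no FN
                w = sum(weights[e] for e in subset)
                if w > best_w:
                    best, best_w = list(subset), w
    return best, best_w

# Each matched pair is a candidate for service migration.
matching, total = max_weight_matching(weights)
```

With these placeholder weights the matching pairs FN1 with FN2 and FN3 with FN4 (total reduction 9.0), rather than the lower-value alternative pairing.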
