Abstract

This paper proposes prediction-based scaling and placement of service function chains (SFCs) to improve service level agreement (SLA) compliance and reduce operation cost. We use a variant of the recurrent neural network (RNN) called the gated recurrent unit (GRU) for resource demand prediction. Based on these predictions, we build an intuitive scale-in/out algorithm. We also develop an algorithm that applies Q-learning to an edge computing environment (EdgeQL) to place the scaled-out VNFs in appropriate locations. The integrated algorithm that combines prediction, scaling, and placement is called RNN-EdgeQL. RNN-EdgeQL (v2) is further improved to achieve application-agnostic, group-level elasticity in the chain, independent of the applications installed on the VNFs. We tested our algorithms on two realistic temporally dynamic load models, Internet traffic (Abilene) and application-specific traffic (Wiki), on an OpenStack testbed. The contribution of this article is threefold. First, the prediction model prepares the target SFC for the upcoming load. Second, the application-agnostic character of the algorithm achieves group-level elasticity in the SFC. Finally, the EdgeQL placement model minimizes the end-to-end path of an SFC in a multi-access edge computing (MEC) environment. As a result, RNN-EdgeQL (v2) gives the lowest overall latency, the fewest SLA violations, and the lowest VNF requirement compared with RNN-EdgeQL (v1) and threshold-based scaling with OpenStack default placement.
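The paper's EdgeQL formulation (its state encoding, action space, and reward) is given in the full text; purely as an illustration of the technique named above, the following is a minimal tabular Q-learning sketch that places a three-VNF chain on a line of edge nodes. The state/action design and the hop-distance reward here are illustrative assumptions, not the authors' formulation.

```python
import random

def learn_chain_placement(num_nodes=4, chain_len=3, episodes=2000,
                          alpha=0.1, gamma=0.9, epsilon=0.2, seed=0):
    """Illustrative tabular Q-learning (not the paper's EdgeQL):
    state  = node hosting the previous VNF (plus a synthetic 'start' state),
    action = node chosen for the next VNF,
    reward = negative hop distance between consecutive VNFs, so the
    learned greedy policy shortens the chain's end-to-end path."""
    rng = random.Random(seed)
    start = num_nodes                      # synthetic state before the first VNF
    Q = [[0.0] * num_nodes for _ in range(num_nodes + 1)]
    for _ in range(episodes):
        state = start
        for _ in range(chain_len):
            if rng.random() < epsilon:
                action = rng.randrange(num_nodes)                     # explore
            else:
                action = max(range(num_nodes), key=lambda a: Q[state][a])
            reward = 0.0 if state == start else -abs(action - state)  # hop cost
            target = reward + gamma * max(Q[action])                  # bootstrap
            Q[state][action] += alpha * (target - Q[state][action])
            state = action
    # Greedy rollout: place each VNF of the chain in turn
    placement, state = [], start
    for _ in range(chain_len):
        state = max(range(num_nodes), key=lambda a: Q[state][a])
        placement.append(state)
    return placement

print(learn_chain_placement())  # greedy policy keeps consecutive VNFs co-located
```

Because the only non-negative reward is for zero hop distance, the greedy rollout converges to placing all VNFs of the chain on the same node; a richer reward (node capacity, link load) would trade that off against co-location.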
