Abstract

The cloud native computing paradigm allows microservice-based applications to take advantage of cloud infrastructure in a scalable, reusable, and interoperable way. However, in a cloud native system, the vast number of configuration parameters and highly granular resource allocation policies can significantly impact performance and deployment cost. To understand and analyze these implications in an easy, quick, and cost-effective way, we present PerfSim, a discrete-event simulator for approximating and predicting the performance of cloud native service chains in user-defined scenarios. To this end, we propose a systematic approach for modeling the performance of microservice endpoint functions by collecting and analyzing their performance and network traces. Combining the extracted models with user-defined scenarios, PerfSim can then simulate the performance behavior of all services over a given period and approximate system KPIs, such as the average response time of requests. Using the processing power of a single laptop, we evaluated both the simulation accuracy and the speed of PerfSim in 104 prevalent scenarios and compared the simulation results with identical deployments in a real Kubernetes cluster. We achieved ~81-99% simulation accuracy in approximating the average response time of incoming requests and a ~16-1200x simulation speed-up factor.
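To illustrate the core idea of simulating a service chain and approximating average response time, the following is a minimal sketch of a discrete-event-style tandem-queue simulation. It is not PerfSim itself: the chain topology, the deterministic per-service times, and the Poisson arrival rate are all hypothetical placeholders standing in for the models PerfSim extracts from performance and network traces.

```python
import random

# Hypothetical two-service chain with assumed service times (seconds).
# In PerfSim these would come from extracted performance models.
SERVICE_TIMES = {"frontend": 0.010, "backend": 0.030}
CHAIN = ["frontend", "backend"]

def simulate(num_requests=1000, arrival_rate=20.0, seed=42):
    """Simulate FIFO processing of requests through the chain and
    return the average end-to-end response time."""
    rng = random.Random(seed)

    # Poisson arrivals: exponential inter-arrival times.
    t, arrivals = 0.0, []
    for _ in range(num_requests):
        t += rng.expovariate(arrival_rate)
        arrivals.append(t)

    free_at = {svc: 0.0 for svc in CHAIN}  # time each service is next idle
    response_times = []
    for arrival in arrivals:
        t = arrival
        for svc in CHAIN:
            start = max(t, free_at[svc])         # wait in queue if busy
            finish = start + SERVICE_TIMES[svc]  # deterministic service time
            free_at[svc] = finish
            t = finish
        response_times.append(t - arrival)
    return sum(response_times) / len(response_times)

print(f"avg response time: {simulate():.4f} s")
```

Because each service is a single FIFO server and requests traverse the chain in arrival order, iterating over requests sequentially is equivalent to a full event-queue implementation for this topology; a real simulator like PerfSim additionally models CPU shares, network latency, and replica placement.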

Highlights

  • Cloud Native Computing is an emerging paradigm of distributed computing that “empower organizations to build and run scalable applications in modern, dynamic environments such as public, private, and hybrid cloud” [1]

  • To mitigate this problem and prevent any noise from affecting the experiment results, we isolated the entire testbed by placing the hosts in a separate rack and ensuring no other hosts were connected to the routers

  • We presented PerfSim as a systematic method and simulation platform for modeling and simulating the performance of large-scale service chains in the context of cloud native computing


Introduction

Cloud Native Computing is an emerging paradigm of distributed computing that “empower organizations to build and run scalable applications in modern, dynamic environments such as public, private, and hybrid cloud” [1]. Among the main advantages of cloud native computing is the possibility of allocating highly granular resources to large-scale chains of services in a cluster. With the rising trend of cloud native computing in containerized cluster environments, analyzing the performance behavior of a service chain in a real testbed gives the most reliable results. In most performance optimization techniques, however, various scenarios need to be evaluated in a timely manner to eventually minimize a cost function. Performing such evaluations in a real testbed, while providing accurate results, imposes a dramatic burden on achieving a scalable and efficient optimization method

