Queueing delay is an important performance measure for most engineered network systems. Providing low-delay service is a major goal of service providers, as delay is a leading concern for users and customers. These network systems and their performance measures are typically analyzed using queueing-based models. Although several rigorous and precise mathematical models exist for analyzing queueing systems, their application is limited to simple, small-scale systems because they do not scale to real-life settings. Researchers have devoted considerable effort to perfecting the analysis of such systems. Precise and accurate results are available for single-node systems with standard operations; for multi-node systems with complex operations, however, one must resort to approximations or simulations. These approximations often give an oversimplified view of such systems and remain quite limited. In this paper, we present a machine learning tool that can potentially be used to analyze most finite-buffer queues and obtain reasonable approximations for the mean number of items in such systems. The tool is based on supervised learning with the Michaelis–Menten non-linear model from biochemistry, and its results are simple to obtain. It is fast and highly scalable, which are its main advantages over existing approaches. The coefficient of determination R² is higher than 90% for all the examples presented, reaching 99.6% in some cases.
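The abstract does not detail the fitting procedure, but the following minimal sketch illustrates how a Michaelis–Menten curve L(x) = a·x / (b + x) might be fit to queue measurements in a supervised-learning fashion. All data values, variable names (e.g., rho, mean_items), and parameter choices here are hypothetical illustrations, not taken from the paper.

```python
# Illustrative sketch (not the authors' code): fitting a Michaelis-Menten
# curve to hypothetical finite-buffer queue measurements and reporting R^2.
import numpy as np
from scipy.optimize import curve_fit

def michaelis_menten(x, a, b):
    # a: asymptote (analogous to Vmax); b: half-saturation constant (Km)
    return a * x / (b + x)

# Hypothetical training data: traffic intensity rho vs. observed mean
# number of items in the system (e.g., from simulation runs).
rho = np.array([0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9])
mean_items = np.array([0.9, 1.9, 3.0, 4.1, 5.0, 5.8, 6.4, 6.9, 7.2])

# Least-squares fit of the two Michaelis-Menten parameters.
params, _ = curve_fit(michaelis_menten, rho, mean_items, p0=(10.0, 1.0))
pred = michaelis_menten(rho, *params)

# Coefficient of determination R^2 for the fitted curve.
ss_res = np.sum((mean_items - pred) ** 2)
ss_tot = np.sum((mean_items - mean_items.mean()) ** 2)
r2 = 1.0 - ss_res / ss_tot
print(f"a = {params[0]:.3f}, b = {params[1]:.3f}, R^2 = {r2:.3f}")
```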