Abstract

In this era of big and fast data, software architects often find it difficult to make consistent decisions about which architecture and technologies best fit a given business need. These decisions are made even harder by the scarcity of clear methodologies, best practices, and reference architectures. In this context, architecture evaluation through benchmarking is of great interest, as it enables the early detection of performance anomalies and bottlenecks. The problem with existing Big Data benchmarking solutions is that they remain tied to specific technologies and do not address the heterogeneity of complex architectures. Moreover, businesses generally operate multi-layered systems involving various technologies, paradigms, and micro-architectures, so a benchmarking solution must provide fine-grained insights into each layer. A successful benchmarking system must also be seamless, easy to use, scalable, and preferably cloud native. To satisfy these requirements, we designed and implemented Babel, a generic Big Data benchmarking platform that ensures end-to-end performance evaluation and monitoring. In this paper, we present the principles, architecture, integration, and deployment steps of Babel.
