Abstract

Event processing involves continuous evaluation of queries over streams of events. Response-time optimization is traditionally done over a fixed set of nodes and/or by using metrics measured at query-operator levels. Cloud computing makes it easy to acquire and release computing nodes as required. Leveraging this flexibility, we propose a novel, queueing-theory-based approach for meeting specified response-time targets against fluctuating event arrival rates by drawing only the necessary amount of computing resources from a cloud platform. In the proposed approach, the entire processing engine of a distinct query is modelled as an atomic unit for predicting response times. Several such units hosted on a single node are modelled as a multiple class M/G/1 system. These aspects eliminate intrusive, low-level performance measurements at run-time, and also offer portability and scalability. Using model-based predictions, cloud resources are efficiently used to meet response-time targets. The efficacy of the approach is demonstrated through cloud-based experiments.
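The multi-class M/G/1 prediction described above can be sketched numerically. A minimal illustration, assuming FCFS service and the Pollaczek–Khinchine mean-waiting-time formula; function and parameter names are hypothetical, not taken from the paper:

```python
# Sketch: per-class mean response times for several query-processing units
# ("classes") hosted on one node, modelled as a multi-class M/G/1 queue.
# Assumes FCFS service; each class i is (arrival rate lambda_i,
# mean service time E[S_i], second moment E[S_i^2]).

def mg1_response_times(classes):
    # total utilisation: rho = sum_i lambda_i * E[S_i]
    rho = sum(lam * es for lam, es, _ in classes)
    if rho >= 1.0:
        raise ValueError("unstable: total utilisation >= 1")
    # Pollaczek-Khinchine mean waiting time, shared by all classes under FCFS:
    # W = sum_i lambda_i * E[S_i^2] / (2 * (1 - rho))
    wait = sum(lam * es2 for lam, _, es2 in classes) / (2.0 * (1.0 - rho))
    # mean response time of class i = its own mean service time + common wait
    return [es + wait for _, es, _ in classes]
```

For example, two identical units with rate 10 events/s, E[S] = 0.02 s and E[S²] = 0.001 s² give a utilisation of 0.4 and a predicted response time of roughly 0.037 s each; such predictions can then be compared against a response-time target without any low-level run-time measurement.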

Highlights

  • Event processing is characterized by the continuous processing of streamed data tuples or events in order to evaluate, in a timely manner, the queries deployed by decision support systems

  • On the basis of these observations, we model an event processing network (EPN) as a single-queue, single-'server' system wherein the server is the composite operator consisting of all operators within that EPN

  • This observation leads to two inferences: first, an intelligent component such as the configuration scheduler (CS) is vital to ensure that RT meets T when EPNs face fluctuating event arrival rates; secondly, nodes must report both RT and AR to the CS proactively, as proposed in §4, not just in response to significant changes they observe in AR

Summary

Introduction

Event processing is characterized by the continuous processing of streamed data tuples, or events, in order to evaluate, in a timely manner, the queries deployed by decision support systems. In the context of event processing, the granularity of load optimization has traditionally been DAG vertices or a sub-graph of the DAG. Systems such as Aurora [5] identify operators common to multiple queries for efficient resource provisioning in a single-server context. We take a coarse-grained approach to load optimization: the granularity is the state machine, or the event processing network (EPN), that implements the entire DAG of a given query. Being coarse-grained has two advantages: first, variations in the number of queries to be processed can be dealt with, provided additional hosts are available; secondly, spare hosts need not be kept warm, as low-level parallelization is not sought. These advantages make our approach well suited to cloud platforms.
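The coarse-grained view makes configuration selection a packing problem: place whole EPNs onto as few nodes as possible while each node's predicted response time stays below the target. A minimal first-fit sketch, reusing a multi-class M/G/1 prediction; all names (`Epn`, `pack_epns`) and the first-fit heuristic are illustrative assumptions, not the paper's actual scheduler:

```python
# Hypothetical sketch: pack EPNs onto the fewest cloud nodes such that each
# node's predicted mean response time stays under a target T.
from dataclasses import dataclass

@dataclass
class Epn:
    rate: float   # event arrival rate (events/s)
    es: float     # mean service time E[S] (s)
    es2: float    # second moment of service time E[S^2] (s^2)

def predicted_rt(epns):
    """Worst per-class mean response time on a node hosting these EPNs,
    using the multi-class M/G/1 (Pollaczek-Khinchine) waiting time."""
    rho = sum(e.rate * e.es for e in epns)
    if rho >= 1.0:
        return float("inf")  # unstable node: never acceptable
    wait = sum(e.rate * e.es2 for e in epns) / (2.0 * (1.0 - rho))
    return max(e.es for e in epns) + wait

def pack_epns(epns, target_rt):
    """First-fit: place each EPN on the first node still meeting the target;
    acquire a fresh node (from the cloud) when none fits."""
    nodes = []
    for epn in epns:
        for node in nodes:
            if predicted_rt(node + [epn]) <= target_rt:
                node.append(epn)
                break
        else:
            nodes.append([epn])  # draw one more node from the cloud platform
    return nodes
```

Because predictions come from the model alone, the scheduler can decide when to acquire or release nodes without intrusive per-operator measurements; only arrival rates and observed response times need to be reported.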

System description
The architecture
Configuration scheduler
Selecting a new configuration
Validation
Conclusions