Abstract

In the era of big data, the volume, velocity, and variety of data are growing at an unprecedented pace. With this explosion of data on the web, internet companies are working to build powerful analytics systems that can crunch the vast user data available to them and offer rich business insights. However, generating analytical reports and insights from high-dimensional online user data incurs significant computation cost. Enterprises are also looking for low-latency analytics systems that dramatically accelerate the time from data to decision by minimizing the delay between a transaction and the resulting decision. In this work, we attempt to create smaller samples from the actual online big user data, so that analytical reports can be derived from the sampled data faster and at reduced computation cost without significant trade-offs in precision and accuracy. This study empirically analyzes petabytes of traffic data across Yahoo sites and develops efficient sampling and metric computation mechanisms for large-scale web traffic. We generated analytical reports containing traffic and user engagement metrics from both sampled and full Yahoo web data and compared them in terms of latency, computation cost, and accuracy to show the effectiveness of our approach.
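As a rough illustration of the general idea, the Python sketch below samples traffic at the user level with a fixed rate and scales the sampled metrics back up to estimate site-wide totals. The sampling rate, event schema, and function names here are assumptions chosen for illustration, not the actual mechanisms developed in the paper.

```python
import hashlib
import random

# Assumed sampling rate for illustration; the paper's rates are not reproduced here.
SAMPLE_RATE = 0.01  # keep roughly 1% of users

def in_sample(user_id: str) -> bool:
    # Hash-based inclusion keeps all events of a sampled user together,
    # so per-user engagement metrics remain computable on the sample.
    bucket = int(hashlib.md5(user_id.encode()).hexdigest(), 16) % 10_000
    return bucket < SAMPLE_RATE * 10_000

def estimate_metrics(events):
    """Estimate total page views and unique users from the sampled events.

    `events` is an iterable of (user_id, page_views) pairs, a hypothetical
    flattened traffic log used only for this sketch.
    """
    sampled_views = 0
    sampled_users = set()
    for user_id, views in events:
        if in_sample(user_id):
            sampled_views += views
            sampled_users.add(user_id)
    # Scale sample totals back up by the inverse sampling rate.
    return sampled_views / SAMPLE_RATE, len(sampled_users) / SAMPLE_RATE

# Toy traffic log: 100k synthetic users with a few page views each.
events = [(f"user{i}", random.randint(1, 5)) for i in range(100_000)]
est_views, est_users = estimate_metrics(events)
print(f"estimated page views: {est_views:.0f}, estimated users: {est_users:.0f}")
```

Because inclusion is decided per user rather than per event, the sample preserves complete user histories, which is what makes engagement metrics (visits per user, dwell time, and the like) estimable from the reduced data at a fraction of the computation cost.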
