Abstract

Existing big data platforms ingest data through distributed processing frameworks and store it in a data lake. Architectures such as Lambda and Kappa address the real-time and batch processing of data: such systems provide real-time analytics on raw data and delayed analytics on curated data. Data denormalization and the creation and maintenance of a columnar dimensional data warehouse are usually time-consuming, with little or no support for unstructured data. The system introduced in this paper automatically creates and dynamically maintains its data warehouse as part of its big data pipeline, in addition to its data lake. It builds the warehouse over structured, semi-structured, and unstructured data. It uses machine learning to identify and create dimensions, and it establishes relations among data from different data sources, creating the corresponding dimensions. It dynamically optimizes the dimensions based on crowd-sourced feedback from end users and on query analysis.
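The abstract does not describe the authors' ML models for identifying dimensions. As an illustrative baseline only (an assumption, not the paper's method), a simple cardinality heuristic can flag likely dimension columns: columns with few distinct values relative to row count behave like categorical dimensions, while near-unique numeric columns behave like fact measures. The function and data below are hypothetical.

```python
# Hypothetical sketch: flag dimension candidates in a flat table using a
# cardinality heuristic. This is NOT the paper's ML approach; it is a
# common baseline for automated dimension discovery.

def dimension_candidates(rows, max_cardinality_ratio=0.5):
    """Return column names whose distinct-value ratio is low enough to
    suggest a dimension (categorical) rather than a fact (measure)."""
    if not rows:
        return []
    candidates = []
    for col in rows[0].keys():
        values = [r[col] for r in rows]
        ratio = len(set(values)) / len(values)
        if ratio <= max_cardinality_ratio:
            candidates.append(col)
    return candidates

# Example (made-up data): 'region' repeats heavily, so it is a likely
# dimension; 'amount' is unique per row, so it is a likely measure.
sales = [
    {"region": "east", "amount": 125.0},
    {"region": "east", "amount": 310.5},
    {"region": "west", "amount": 99.9},
    {"region": "west", "amount": 410.0},
    {"region": "east", "amount": 77.2},
    {"region": "west", "amount": 260.1},
]
print(dimension_candidates(sales))  # → ['region']
```

A real system would refine such heuristics with learned models and with the crowd-sourced feedback and query analysis the abstract mentions.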
