Abstract

Customer Relationship Management (CRM) is a systematic approach to working with current and prospective customers in order to manage long-term relationships and interactions between a company and its customers. Big Data refers to huge data repositories, assembled from online and offline sources, that are hard to process with traditional data processing tools and techniques. The presented research work explores the potential of Big Data to create, optimise and transform an insightful customer relationship management system by analysing large datasets to enhance customer life-cycle profitability. In this work, the “Book-Crossing” dataset is used for Big Data processing and for execution-time analysis of simple and complex SQL queries. The research analyses the impact of data size on query execution time for Apache Spark, one of the most widely used Big Data frameworks: a recently developed in-memory processing framework whose Spark SQL module supports efficient SQL query execution. It was found that Apache Spark gives better results on large datasets than on small ones, and fares better than Hadoop, another widely used Big Data framework (based on qualitative analysis).
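The execution-time methodology described above can be sketched as follows. This is a minimal, self-contained illustration only: it times a simple and a more complex SQL query at two data sizes, using SQLite in place of Spark SQL so the example runs anywhere, and the table and column names (ratings, user_id, book_id, rating) are hypothetical simplifications of the Book-Crossing schema, not the paper's actual setup.

```python
import sqlite3
import time

def build_db(n_rows):
    """Create an in-memory table with n_rows synthetic rating rows."""
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE ratings (user_id INT, book_id INT, rating INT)")
    conn.executemany(
        "INSERT INTO ratings VALUES (?, ?, ?)",
        ((i % 1000, i % 5000, i % 11) for i in range(n_rows)),
    )
    return conn

def time_query(conn, sql):
    """Return (elapsed_seconds, number_of_result_rows) for one query."""
    start = time.perf_counter()
    rows = conn.execute(sql).fetchall()
    return time.perf_counter() - start, len(rows)

# A simple query (one aggregate) and a more complex one (group-by,
# filter on the group, and an ordering step).
SIMPLE = "SELECT COUNT(*) FROM ratings"
COMPLEX = (
    "SELECT user_id, AVG(rating) AS avg_r FROM ratings "
    "GROUP BY user_id HAVING COUNT(*) > 5 ORDER BY avg_r DESC"
)

if __name__ == "__main__":
    # Repeat the measurement at two data sizes to observe scaling.
    for n in (10_000, 100_000):
        conn = build_db(n)
        t_simple, _ = time_query(conn, SIMPLE)
        t_complex, _ = time_query(conn, COMPLEX)
        print(f"rows={n}: simple={t_simple:.4f}s complex={t_complex:.4f}s")
        conn.close()
```

In an actual Spark setting the same pattern would apply, with `spark.sql(...)` executing the query over a DataFrame registered as a temporary view instead of a SQLite connection.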
