Abstract

Extensive use of Internet-based applications in day-to-day life generates huge amounts of data every minute. Apart from humans, data is generated by machines such as sensors, satellites, and CCTV cameras. This huge collection of heterogeneous data is often referred to as Big Data, which can be processed to draw useful insights. Apache Hadoop has emerged as a widely used open source software framework for Big Data processing; it is a cluster of cooperative computers enabling distributed parallel processing. The Hadoop Distributed File System (HDFS) stores data as blocks that are replicated and spread across different nodes. HDFS uses an AES-based cryptographic technique at the block level that is transparent and end to end in nature. However, while cryptography protects the data blocks from unauthorized access, a legitimate user can still harm the data. One such example is the execution of malicious MapReduce jar files by a legitimate user, which can damage the data in HDFS. We developed a mechanism in which every MapReduce jar is tested by our sandbox security layer to ensure it is not malicious, and suspicious jar files are not allowed to process the data in HDFS. This feature is not present in the existing Apache Hadoop framework, and our work is made available on GitHub for consideration and inclusion in future versions of Apache Hadoop.
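As a rough illustration of the kind of pre-execution check such a sandbox might perform (this is a hedged sketch, not the implementation described in the paper; the JarScreener class, the screenJar method, and the blacklist of API names are illustrative assumptions), a submitted jar can be opened and its class files scanned for references to APIs a MapReduce job has no reason to call before it is handed to the cluster:

```java
import java.io.IOException;
import java.io.InputStream;
import java.nio.charset.StandardCharsets;
import java.util.Enumeration;
import java.util.List;
import java.util.jar.JarEntry;
import java.util.jar.JarFile;

// Hypothetical pre-submission screen: reject jars whose classes reference
// APIs such as process execution. A real system would use proper bytecode
// analysis; this sketch only scans constant-pool text.
public class JarScreener {

    // Class names that appear as UTF-8 entries in the constant pool of
    // classes that use these APIs (illustrative blacklist).
    private static final List<String> BLACKLIST = List.of(
            "java/lang/Runtime",
            "java/lang/ProcessBuilder");

    public static boolean screenJar(String jarPath) throws IOException {
        try (JarFile jar = new JarFile(jarPath)) {
            Enumeration<JarEntry> entries = jar.entries();
            while (entries.hasMoreElements()) {
                JarEntry entry = entries.nextElement();
                if (!entry.getName().endsWith(".class")) {
                    continue;
                }
                try (InputStream in = jar.getInputStream(entry)) {
                    // ISO-8859-1 maps every byte 1:1, so an ASCII substring
                    // search over the raw class bytes is safe for this sketch.
                    String body = new String(in.readAllBytes(), StandardCharsets.ISO_8859_1);
                    for (String banned : BLACKLIST) {
                        if (body.contains(banned)) {
                            System.out.println("Rejected " + entry.getName()
                                    + ": references " + banned);
                            return false;   // suspicious jar, do not run
                        }
                    }
                }
            }
        }
        return true;                        // no blacklisted references found
    }

    public static void main(String[] args) throws IOException {
        System.out.println(screenJar(args[0]) ? "ALLOW" : "BLOCK");
    }
}
```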

Highlights

  • Apache Hadoop has emerged as a widely used open source framework for Big Data processing

  • In the existing Hadoop framework, a legitimate user can execute any MapReduce job using the hadoop jar command. The JobTracker daemon on the NameNode forwards the job to the respective DataNodes, where TaskTrackers invoke a new Java Virtual Machine (JVM) instance for every file block to execute the MapReduce job in a distributed, parallel manner (see the minimal driver sketch after this list)

  • Our work provides a sandboxing facility in which unwanted or harmful jar files are prevented from executing on the file system
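To make the submission flow described in the second highlight concrete, the following is a minimal, conventional MapReduce driver (a standard Hadoop word count; the class names and input/output paths are placeholders, and nothing here is specific to the paper's contribution). It is packaged into a jar and submitted with `hadoop jar wordcount.jar WordCount /input /output`:

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

// Minimal word-count job submitted via the hadoop jar command.
public class WordCount {

    public static class TokenMapper
            extends Mapper<LongWritable, Text, Text, IntWritable> {
        private static final IntWritable ONE = new IntWritable(1);
        private final Text word = new Text();

        @Override
        protected void map(LongWritable key, Text value, Context context)
                throws java.io.IOException, InterruptedException {
            // Each map task processes one input split (typically one HDFS block).
            for (String token : value.toString().split("\\s+")) {
                if (!token.isEmpty()) {
                    word.set(token);
                    context.write(word, ONE);
                }
            }
        }
    }

    public static class SumReducer
            extends Reducer<Text, IntWritable, Text, IntWritable> {
        @Override
        protected void reduce(Text key, Iterable<IntWritable> values, Context context)
                throws java.io.IOException, InterruptedException {
            int sum = 0;
            for (IntWritable v : values) {
                sum += v.get();
            }
            context.write(key, new IntWritable(sum));
        }
    }

    public static void main(String[] args) throws Exception {
        Job job = Job.getInstance(new Configuration(), "word count");
        job.setJarByClass(WordCount.class);
        job.setMapperClass(TokenMapper.class);
        job.setReducerClass(SumReducer.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}
```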

Introduction

Apache Hadoop has emerged as a widely used open source framework for Big Data processing. Big Data processing is used in healthcare, social media, banking, insurance, good governance, stock markets, retail and supply chain, e-commerce, education, scientific research, and other domains to gain deep insights into data and its associations and to make better decisions [1]. Apache Hadoop addresses the two major challenges of Big Data, namely storage and processing: data is stored in Hadoop using HDFS and processed through MapReduce programming. Apache Hadoop is a cluster of cooperative computers. The anatomy of a Hadoop cluster can be understood from Fig. 1.
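As a small sketch of the storage side using the standard HDFS Java API (the NameNode URI, file path, and contents below are assumptions for illustration, not values from the paper), a client simply writes a stream and HDFS transparently splits it into blocks replicated across DataNodes:

```java
import java.nio.charset.StandardCharsets;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

// Sketch: write a file to HDFS and inspect its block size and replication.
// The NameNode URI and path are placeholders for an actual cluster.
public class HdfsWriteExample {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        conf.set("fs.defaultFS", "hdfs://namenode:9000");   // assumed cluster address

        try (FileSystem fs = FileSystem.get(conf)) {
            Path path = new Path("/user/demo/sample.txt");

            // HDFS splits the stream into blocks and replicates each block
            // (3 copies by default) across different DataNodes.
            try (FSDataOutputStream out = fs.create(path, true)) {
                out.write("hello big data".getBytes(StandardCharsets.UTF_8));
            }

            FileStatus status = fs.getFileStatus(path);
            System.out.println("block size  : " + status.getBlockSize());
            System.out.println("replication : " + status.getReplication());
        }
    }
}
```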
