Abstract

Data has become an integral part of modern life. As the volume and variety of available data grow, big data has emerged as a powerful domain, and its processing demands substantial resources. This paper focuses on the resources utilized in a big data environment, such as memory, files, and network bandwidth. The Hadoop framework handles data distributed over a cluster. The main concentration of this paper is on the usage of memory space, or disk space. Experimentation is carried out through MapReduce programs on different versions of Hadoop by changing some of the parameter values in the configuration files.
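The abstract does not list which configuration parameters were varied, but disk usage in Hadoop is commonly influenced by HDFS settings such as block size and replication factor. The fragment below is a minimal, illustrative sketch of an `hdfs-site.xml` file; the property names are standard Hadoop settings, while the specific values are assumptions for demonstration only.

```xml
<!-- hdfs-site.xml: illustrative settings that affect disk usage (values are assumptions) -->
<configuration>
  <!-- Number of copies of each block kept across the cluster;
       lowering it reduces total disk usage at the cost of fault tolerance -->
  <property>
    <name>dfs.replication</name>
    <value>2</value>
  </property>
  <!-- HDFS block size in bytes (128 MB here); affects how files
       are split and laid out on disk across data nodes -->
  <property>
    <name>dfs.blocksize</name>
    <value>134217728</value>
  </property>
</configuration>
```

In practice, such changes are applied on each node (or via the cluster manager) and take effect for newly written files after the relevant daemons are restarted.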
