Abstract

Recently, “Big Data” platform technologies have become crucial for the distributed processing of diverse unstructured or semi-structured data as the amount of data generated increases rapidly. To manage such Big Data effectively, Cloud Computing has played an important role by providing scalable data storage and computing resources for competitive and economical Big Data processing. Accordingly, server virtualization technologies, the cornerstone of Cloud Computing, have attracted significant research interest. However, conventional hypervisor-based virtualization can suffer from performance degradation due to its heavyweight guest operating systems and rigid resource allocations. Container-based virtualization, on the other hand, can provide the same level of service with a lighter footprint by eliminating the guest OS layer. In addition, container-based virtualization enables efficient cloud resource management by dynamically adjusting the allocated computing resources (e.g., CPU and memory) at runtime through “Vertical Elasticity”. In this paper, we present our practice and experience in employing an adaptive resource utilization scheme for Big Data workloads in container-based cloud environments by leveraging the vertical elasticity of Docker, a representative container-based virtualization technique. We perform extensive experiments running several Big Data workloads on representative Big Data platforms, Apache Hadoop and Spark. During workload execution, our adaptive resource utilization scheme periodically monitors the resource usage patterns of running containers and dynamically adjusts the allocated computing resources, which can result in substantial improvements in overall system throughput.
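To illustrate the kind of container vertical elasticity the scheme relies on, the sketch below shows how a monitor could periodically sample per-container CPU statistics and resize a running container's CFS quota in place through the docker-py SDK. This is a minimal, assumption-laden example rather than the authors' actual implementation: the thresholds, scaling factors, and monitoring interval are illustrative, and it assumes containers were started with an explicit CPU quota.

```python
# Hedged sketch: periodic monitoring + vertical scaling of Docker containers.
# Thresholds, scaling factors, and the interval are illustrative assumptions.
import time
import docker

CPU_PERIOD = 100_000          # default CFS period in microseconds
SCALE_UP_THRESHOLD = 0.80     # assumed: grow a container using >80% of its quota
SCALE_DOWN_THRESHOLD = 0.30   # assumed: shrink a container using <30% of its quota
MONITOR_INTERVAL_SEC = 10     # assumed monitoring period

def cpu_usage_in_cpus(stats):
    """Approximate number of CPUs the container used since the previous sample."""
    cpu_delta = (stats["cpu_stats"]["cpu_usage"]["total_usage"]
                 - stats["precpu_stats"]["cpu_usage"]["total_usage"])
    sys_delta = (stats["cpu_stats"]["system_cpu_usage"]
                 - stats["precpu_stats"]["system_cpu_usage"])
    online_cpus = stats["cpu_stats"].get("online_cpus", 1)
    return (cpu_delta / sys_delta) * online_cpus if sys_delta > 0 else 0.0

def adjust(container):
    stats = container.stats(stream=False)              # one-shot stats sample
    used_cpus = cpu_usage_in_cpus(stats)
    quota = container.attrs["HostConfig"]["CpuQuota"] or CPU_PERIOD
    allocated_cpus = quota / CPU_PERIOD                # current quota expressed in CPUs
    utilization = used_cpus / allocated_cpus
    if utilization > SCALE_UP_THRESHOLD:
        new_quota = int(quota * 1.5)                   # grow allocation by 50%
    elif utilization < SCALE_DOWN_THRESHOLD:
        new_quota = max(int(quota * 0.5), CPU_PERIOD // 10)  # shrink, keep a floor
    else:
        return
    # Vertical elasticity: resize the running container without restarting it.
    container.update(cpu_period=CPU_PERIOD, cpu_quota=new_quota)

if __name__ == "__main__":
    client = docker.from_env()
    while True:
        for c in client.containers.list():             # running containers only
            adjust(c)
        time.sleep(MONITOR_INTERVAL_SEC)
```

Memory limits can be resized analogously by passing `mem_limit` (and, if needed, `memswap_limit`) to the same `update()` call.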

Highlights

  • With the development of IT technologies, the amount of data generated is increasing at a tremendous rate ranging from TB to PB

  • This paper presents an adaptive container-based cloud resource management scheme especially optimized for Big Data workloads by effectively employing the vertical elasticity of Docker containers to improve the aggregated throughput of multiple data analytics applications running on the same physical platform

  • We presented our practice and experience of employing an adaptive cloud resource utilization scheme based on container-based vertical elasticity, especially optimized for Big Data workloads



Introduction

With the development of IT technologies, the amount of data generated is increasing at a tremendous rate, ranging from TB to PB (or even more). These data come both from conventional structured sources, such as relational database management systems (RDBMS), and from diverse unstructured or semi-structured data sources. While Big Data technologies can improve every part of a business, from providing insights for new analytical applications to augmenting traditional on-premise systems, the complexity of managing the underlying infrastructure has become challenging even for enterprises as the overall scale of Big Data projects grows [6]. Cloud Computing has become a viable choice for tackling Big Data problems, which results in the inevitable encounter of Big Data and Cloud Computing.


