Abstract

This paper presents a suite of memory abstraction and optimization techniques for distributed executors, focusing on performance-optimization opportunities for Spark executors, which outperform Hadoop MapReduce executors by leveraging Resilient Distributed Datasets (RDDs), Spark's core abstraction. This paper makes three original contributions. First, we show that Spark applications suffer large performance degradation when an RDD is too large to fit in memory, causing unbalanced memory utilization and premature spilling. Second, we develop a suite of techniques to guide the configuration of RDDs in Spark executors, aiming to optimize the performance of iterative machine learning (ML) workloads when the allocated executor memory is sufficient for RDD caching. Third, we design DAHI, a lightweight RDD optimizer. DAHI provides three enhancements to Spark: (i) using elastic executors instead of fixed-size JVM executors; (ii) supporting coarser-grained tasks and large RDDs by enabling partial RDD caching; and (iii) automatically leveraging remote memory for secondary RDD caching when primary RDD cache space on the local node runs short. Extensive experiments on machine learning and graph processing benchmarks show that with DAHI, the performance of ML workloads and applications on Spark improves by up to 12.4x.
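
To make the baseline concrete, below is a minimal Spark (Scala) sketch of the stock mechanisms the abstract contrasts DAHI with: a fixed-size JVM executor heap and RDD caching that spills overflow partitions to local disk. This is not the paper's DAHI implementation; the input path, memory settings, and iteration count are illustrative assumptions.

```scala
// Sketch of stock Spark RDD caching for an iterative workload, assuming a
// fixed-size executor heap. Partitions that fit stay in memory; the rest
// spill to local disk -- the premature-spilling baseline the paper measures
// against. Submit with spark-submit, which supplies the cluster master.
import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.storage.StorageLevel

object RddCachingSketch {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf()
      .setAppName("rdd-caching-sketch")
      .set("spark.executor.memory", "4g")          // fixed-size JVM executor (illustrative)
      .set("spark.memory.storageFraction", "0.5")  // share of managed memory reserved for caching
    val sc = new SparkContext(conf)

    // Hypothetical input path; any large line-oriented numeric dataset works.
    val points = sc
      .textFile("hdfs:///data/points")
      .map(_.split(',').map(_.toDouble))

    // MEMORY_AND_DISK: cached partitions that overflow the heap are written
    // to local disk rather than recomputed on the next iteration.
    points.persist(StorageLevel.MEMORY_AND_DISK)

    // Iterative reuse: each pass re-reads the cached partitions, so cache
    // misses and spills are paid on every iteration.
    for (i <- 1 to 10) {
      val n = points.count()
      println(s"iteration $i saw $n points")
    }
    sc.stop()
  }
}
```

Under this baseline, an RDD that slightly exceeds the storage pool forces repeated disk reads on every iteration; the elastic executors, partial caching, and remote secondary caching described above target exactly this regime.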
