Abstract

MapReduce is widely used in many applications, including high performance computing. A MapReduce job typically consists of map, shuffle, and reduce phases. Among these, the shuffle phase, in which intermediate data are transferred from mappers to reducers, often accounts for a large portion of the total running time. MapReduce was originally designed for scale-out architectures built from inexpensive commodity machines. In recent years, however, scale-up architectures for MapReduce jobs have also been developed, and several studies indicate that in certain cases a powerful scale-up machine can outperform a scale-out cluster of multiple machines. With a multi-processor, multi-core design connected via NUMAlink and large shared memories, the NUMA architecture provides powerful scale-up computing capability. Compared with Ethernet connections and TCP/IP networking, NUMAlink offers much higher data transfer speed, which can greatly accelerate the data shuffling of MapReduce jobs. However, the impact of NUMAlink on data shuffling in NUMA scale-up architectures has not been fully investigated in previous work. In this paper, we set aside the computation (i.e., the map and reduce phases) and focus on optimizing the data shuffling phase of the MapReduce framework on a NUMA machine. We exploit the varying bandwidth capacities of the NUMAlink(s) between different memory locations to fully utilize the network. We investigate the NUMAlink topology, using the SGI UV 2000 as an example, and propose a topology-aware reducer placement algorithm to speed up the data shuffling phase. In addition, we extend our approach to a larger computing environment with multiple NUMA machines and design a reducer placement scheme to expedite inter-machine data shuffling. Experimental results show that our solution greatly reduces data shuffling time on the NUMA architecture.
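
To make the idea of topology-aware reducer placement concrete, the Python sketch below is a hypothetical illustration only, not the algorithm proposed in the paper. It assumes two inputs that the abstract implies are available: a pairwise bandwidth matrix between NUMA nodes (reflecting the NUMAlink topology) and an estimate of how much intermediate data each node will send to each reducer. The sketch then greedily places each reducer on the node that minimizes its estimated shuffle transfer time. All names (place_reducers, shuffle_volume, node_slots) are invented for this example.

from typing import Dict, List


def place_reducers(
    bandwidth: List[List[float]],           # bandwidth[i][j]: link bandwidth (GB/s) from node i to node j
    shuffle_volume: Dict[int, List[float]],  # shuffle_volume[r][i]: GB that node i sends to reducer r
    node_slots: List[int],                   # free reducer slots on each NUMA node
) -> Dict[int, int]:
    """Greedy placement: assign each reducer (heaviest shuffle volume first)
    to the node where its data can be pulled in the least estimated time."""
    placement: Dict[int, int] = {}
    slots = list(node_slots)
    # Handle the reducers with the most incoming data first so they get the best nodes.
    reducers = sorted(shuffle_volume, key=lambda r: -sum(shuffle_volume[r]))
    for r in reducers:
        best_node, best_time = None, float("inf")
        for node, free in enumerate(slots):
            if free == 0:
                continue
            # Estimated shuffle time if reducer r runs on `node`:
            # sum over source nodes of (data volume) / (bandwidth of the link to `node`).
            t = sum(
                vol / bandwidth[src][node]
                for src, vol in enumerate(shuffle_volume[r])
                if vol > 0 and src != node  # local data needs no NUMAlink transfer
            )
            if t < best_time:
                best_node, best_time = node, t
        placement[r] = best_node
        slots[best_node] -= 1
    return placement


if __name__ == "__main__":
    # Toy 4-node topology: directly linked node pairs enjoy higher bandwidth
    # than pairs that must traverse an intermediate hop.
    bw = [
        [0.0, 6.7, 3.3, 3.3],
        [6.7, 0.0, 3.3, 3.3],
        [3.3, 3.3, 0.0, 6.7],
        [3.3, 3.3, 6.7, 0.0],
    ]
    volumes = {0: [4.0, 1.0, 0.5, 0.5], 1: [0.5, 0.5, 3.0, 2.0]}
    print(place_reducers(bw, volumes, node_slots=[1, 1, 1, 1]))
    # Reducer 0 lands on node 0 and reducer 1 on node 2, i.e., close to where
    # most of their intermediate data resides.

Placing the heaviest reducers first is a standard greedy heuristic; the paper's actual algorithm may rank candidates differently based on the specific SGI UV 2000 NUMAlink topology.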
