Abstract

The concept of “all cores are created equal” has been popular for several decades due to its simplicity and effectiveness in CPU (Central Processing Unit) design. The more cores a CPU has, the higher the performance of the host, but also the higher the power consumption. However, power saving is also a key goal for servers in data centers and for embedded devices (e.g., mobile phones). The big.LITTLE multicore architecture, which combines high-performance cores (big cores) and power-efficient cores (little cores), has been developed by ARM (Advanced RISC Machine) and Intel to trade off performance and power efficiency. On this new heterogeneous computing architecture, traditional lock algorithms, which were designed for homogeneous architectures, no longer work optimally and suffer performance problems caused by the speed difference between big and little cores. In our preliminary experiments, we observed that all of these lock algorithms exhibit sub-optimal performance on the big.LITTLE multicore architecture. FIFO-based (First In First Out) locks suffer throughput degradation, while competition-based locks fall into two categories: big-core-friendly locks, whose tail latency increases significantly, and little-core-friendly locks, whose tail latency increases and whose throughput is also degraded. Motivated by this observation, we propose CAL, a Core-Aware Lock for the big.LITTLE multicore architecture, which gives every core an equal opportunity to access the critical section of a program. The core idea of CAL is to use the slowdown ratio as the metric for reordering the lock requests of big and little cores. Evaluations with benchmarks and a real-world application, LevelDB, confirm that CAL achieves its fairness goals on the heterogeneous architecture without sacrificing big-core performance. Compared with several traditional lock algorithms, CAL improves fairness by up to 67%; its throughput is 26% higher than that of FIFO-based locks and 53% higher than that of competition-based locks. In addition, CAL consistently keeps tail latency low.
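To make the core idea concrete, the following is a minimal illustrative sketch in C, not the paper's implementation: a lock that charges each acquisition to its core class (big or little), weights little-core acquisitions by an assumed slowdown ratio, and only lets a waiter proceed when its class is not ahead of the other class in weighted service. The value of SLOWDOWN_RATIO, the CPU-to-class mapping in core_class(), and the internal use of a pthread mutex and condition variable are all assumptions made for illustration.

/*
 * Minimal illustrative sketch (not the paper's implementation) of a
 * core-aware lock: acquisitions are charged to a core class, with
 * little-core acquisitions weighted by an assumed slowdown ratio, and a
 * waiter may only proceed when its class is not ahead of the other class
 * in weighted service. SLOWDOWN_RATIO and the CPU mapping in core_class()
 * are placeholders, not values from the paper.
 */
#define _GNU_SOURCE
#include <pthread.h>
#include <sched.h>
#include <stdbool.h>

#define SLOWDOWN_RATIO 2.0      /* assumed little-core slowdown vs. big core */

enum { BIG = 0, LITTLE = 1 };

typedef struct {
    pthread_mutex_t m;
    pthread_cond_t  cv;
    bool            held;
    double          served[2];  /* weighted acquisitions per core class */
    int             waiting[2]; /* current waiters per core class */
} cal_lock_t;

static int core_class(void)
{
    /* Placeholder mapping: assume CPUs 0-3 are big cores (Linux-specific). */
    return sched_getcpu() < 4 ? BIG : LITTLE;
}

void cal_init(cal_lock_t *l)
{
    pthread_mutex_init(&l->m, NULL);
    pthread_cond_init(&l->cv, NULL);
    l->held = false;
    l->served[BIG] = l->served[LITTLE] = 0.0;
    l->waiting[BIG] = l->waiting[LITTLE] = 0;
}

void cal_lock(cal_lock_t *l)
{
    int cls = core_class(), other = 1 - cls;

    pthread_mutex_lock(&l->m);
    l->waiting[cls]++;
    /* Wait while the lock is held, or while the other class has waiters
     * and this class is already ahead in weighted service. */
    while (l->held ||
           (l->waiting[other] > 0 && l->served[cls] > l->served[other]))
        pthread_cond_wait(&l->cv, &l->m);
    l->waiting[cls]--;
    l->held = true;
    /* Charge little-core acquisitions more to reflect their slowdown. */
    l->served[cls] += (cls == LITTLE) ? SLOWDOWN_RATIO : 1.0;
    pthread_mutex_unlock(&l->m);
}

void cal_unlock(cal_lock_t *l)
{
    pthread_mutex_lock(&l->m);
    l->held = false;
    pthread_cond_broadcast(&l->cv);
    pthread_mutex_unlock(&l->m);
}

In this sketch, a little-core acquisition is charged SLOWDOWN_RATIO units of service instead of one, so the critical section is not handed to big cores disproportionately often; the actual CAL design reorders lock requests directly, which this simplified condition-variable version only approximates.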
