Abstract
We find that existing benchmark suites for smartphone CPU micro-architecture design, such as Geekbench 5.0, fail to authentically represent the micro-architecture-level performance behavior of widely used real Android applications under interactive operations such as screen sliding. It is therefore crucial to systematically construct a benchmark suite, as a supplement to Geekbench, that represents the user-interaction behavior of Android applications for CPU micro-architecture design. The key is to identify a small number of representative programs from a large number of real applications. To this end, a set of features that represent a program needs to be constructed, and these features should be fair to different micro-architectures and efficiently collectable. However, this is extremely difficult for Android applications; for example, feature-collection tools suitable for benchmark selection are unavailable for Android applications. In this paper, we propose a novel benchmark suite construction approach, dubbed BEMAP, to efficiently build a supplementary benchmark suite from real-world Android applications that represents their user-interaction behavior. BEMAP introduces four techniques. The first technique, two-stage RFC (representative feature construction), builds program features from performance counters (events) in two stages so that benchmarks can be selected from a large number of real Android applications. The first stage identifies a set of performance events that are important for IPC (instructions per cycle) by employing a machine learning algorithm, SGBRT (Stochastic Gradient Boosted Regression Tree). The second stage constructs representative features from these important performance events using ICA (independent component analysis). The second technique, SPC-MMA (source performance counters from multiple micro-architectures), collects performance events from multiple mobile CPUs with different micro-architectures and mixes them as the source of RFC. The goal of these two innovations is to make the program features fair to different mobile CPU micro-architectures. The third technique, the ES (Elbow-Silhouette) approach, leverages the synergy between the elbow method and the silhouette method to determine an optimal K when K-Means is used to group Android applications. The fourth technique is a new tool, AutoProfiler, that automatically profiles the micro-architectural events (e.g., IPC, L1 I-cache misses) of Android applications under interactive operations. Using the proposed BEMAP methodology, we constructed SPBench, a novel benchmark suite that supplements traditional mobile benchmark suites such as Geekbench for mobile CPU micro-architecture design. It consists of fifteen benchmarks selected from one hundred real Android applications with three common user-interaction operations, which is the fifth contribution of this paper. Experimental results on four significantly different micro-architectures show that SPBench represents the micro-architectural performance behavior of the one hundred real-world applications with the three common interactive operations on each micro-architecture with significantly higher accuracy than benchmark suites produced by state-of-the-art approaches.
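As a rough illustration of the two-stage RFC described above, the sketch below (in Python with scikit-learn; the function name, matrix shapes, and parameter values are assumptions, not taken from the paper) ranks performance events by their importance for predicting IPC with a stochastic gradient-boosted regression tree, then applies ICA to the top-ranked events to obtain representative features.

```python
# Hypothetical sketch of the two-stage RFC described in the abstract.
# Stage 1: rank performance events by their importance for predicting IPC
#          with a stochastic gradient-boosted regression tree (SGBRT).
# Stage 2: run ICA over the top-ranked events to build representative features.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.decomposition import FastICA

def two_stage_rfc(counters, ipc, event_names, top_n=16, n_features=8, seed=0):
    """counters: (apps x events) matrix of performance-counter samples;
    ipc: per-app IPC values used as the regression target."""
    # Stage 1: SGBRT importance ranking with respect to IPC
    # (subsample < 1.0 makes the boosting stochastic).
    gbrt = GradientBoostingRegressor(subsample=0.8, random_state=seed)
    gbrt.fit(counters, ipc)
    ranked = np.argsort(gbrt.feature_importances_)[::-1][:top_n]
    important_events = [event_names[i] for i in ranked]

    # Stage 2: ICA over the important events yields independent components
    # that serve as the representative program features.
    ica = FastICA(n_components=n_features, random_state=seed)
    features = ica.fit_transform(counters[:, ranked])
    return features, important_events
```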
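A minimal sketch of the SPC-MMA idea follows, assuming the counters collected on each micro-architecture are standardized before being mixed; the abstract does not specify the normalization actually used.

```python
# Hypothetical sketch of SPC-MMA: performance events profiled on several
# mobile CPUs with different micro-architectures are normalized per
# architecture and mixed into a single source matrix for RFC.
import numpy as np
from sklearn.preprocessing import StandardScaler

def mix_counters(per_arch_counters):
    """per_arch_counters: dict mapping a micro-architecture name to an
    (apps x events) matrix collected on that CPU; all matrices are assumed
    to share the same event columns."""
    mixed = []
    for arch, matrix in per_arch_counters.items():
        # Per-architecture standardization keeps one CPU from dominating the
        # mixed feature source (an assumption, not the paper's exact scheme).
        mixed.append(StandardScaler().fit_transform(matrix))
    # Stack the normalized samples so every micro-architecture contributes
    # equally to feature construction.
    return np.vstack(mixed)
```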
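The ES (Elbow-Silhouette) step could be sketched as follows; the rule used here for combining the elbow and silhouette signals is a placeholder assumption, since the abstract does not state how the two methods are reconciled.

```python
# Hypothetical sketch of the ES step: sweep K, record the K-Means inertia
# (elbow curve) and the silhouette score, and pick a K supported by both.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

def choose_k(features, k_min=2, k_max=30, seed=0):
    ks = list(range(k_min, k_max + 1))
    inertias, silhouettes = [], []
    for k in ks:
        km = KMeans(n_clusters=k, n_init=10, random_state=seed).fit(features)
        inertias.append(km.inertia_)
        silhouettes.append(silhouette_score(features, km.labels_))
    # Elbow: point of maximum curvature, approximated by the largest
    # second difference of the inertia curve.
    elbow_k = ks[int(np.argmax(np.diff(inertias, 2))) + 1]
    # Silhouette: K that maximizes the average silhouette width.
    silhouette_k = ks[int(np.argmax(silhouettes))]
    # Placeholder combination rule: prefer the silhouette optimum when the
    # two criteria roughly agree, otherwise fall back to the elbow point.
    return silhouette_k if abs(silhouette_k - elbow_k) <= 2 else elbow_k
```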