Abstract

These are exciting times for computer architecture research. Today there is significant demand to improve the performance and energy efficiency of emerging, transformative applications, which are being hammered out by the hundreds for new compute platforms and usage models. This booming growth of applications, and the variety of programming languages used to create them, is challenging our ability as architects to rapidly and rigorously characterize these applications. Concurrently, hardware has become more complex with the emergence of accelerators, multicore systems, and the heterogeneity caused by further divergence between processor market segments. No single architect can now understand all the complexities of these systems and reason about the full impact of changes or new applications. Instead, computer architects must often rely on approximations of software behavior and hardware operation, using intuition when necessary and quantitative methods when feasible. Historically, however, advancement has been achieved most rapidly through deep quantitative understanding and evaluation of ideas and systems. Indeed, computer architecture has a long history as a quantitative field and has benefited greatly from the use of quantitative methods. Despite a large body of literature in the area, many questions remain unanswered, motivating further research.

To that end, this dissertation presents four case studies in quantitative methods. Each case study addresses a different application area and proposes a new measurement or analytical technique. In each case study we find at least one surprising or unintuitive result which would likely not have been found without the application of our method.

In our first study, we present a new technique for using performance counters which reduces the overhead of counter reads by 23x. This reduced overhead allows us to measure the detailed behavior of several important web applications. The data we present led to a series of recommendations for future systems, many of which were surprising. For instance, modern web applications differ from popular benchmarks in several important ways, motivating a new set of benchmarks. Although many case studies of application behavior already exist in the literature, our new technique allowed us to examine the detailed behavior of production applications with unscaled inputs.

The second case study examines an emerging class of security problems known as side channels. In short, the sharing of resources like caches creates a channel through which attackers can glean small but crucial pieces of information about other applications; in the most famous cases, attackers can deduce bits of secret encryption keys. This case study attempts to quantify side-channel information leakage, allowing us to compare the security of different systems and begin to understand the reasons for this leakage. Our technique can be applied at or before design time, allowing leaks to be caught and repaired before a product goes to market. We have also found a series of surprising results indicating that the intuitive understanding of existing attacks is somewhat superficial. We conclude that quantitative methods like ours are necessary for the study of side-channel information leaks.
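To make the idea of quantifying leakage concrete, the sketch below estimates the mutual information, in bits per observation, between a secret bit and a noisy timing observation. This is only a minimal illustration under stated assumptions: the simulated victim, the noise model, the threshold, and the sample count are all invented for the example and are not the dissertation's actual metric or tooling.

```python
# Minimal sketch: estimate side-channel leakage as the mutual information
# (in bits) between a secret bit and a discretized timing observation.
# The simulated victim, noise model, and sample count are illustrative
# assumptions, not the dissertation's actual measurement setup.
import math
import random
from collections import Counter

def observe(secret_bit):
    """Hypothetical victim: a secret-dependent cache hit/miss plus noise."""
    base_cycles = 200 if secret_bit else 100      # miss vs. hit latency
    noisy = base_cycles + random.gauss(0, 60)     # measurement noise
    return 1 if noisy > 150 else 0                # attacker's 1-bit observation

def mutual_information(pairs):
    """I(S;O) from empirical joint frequencies of (secret, observation) pairs."""
    n = len(pairs)
    joint = Counter(pairs)
    p_s = Counter(s for s, _ in pairs)
    p_o = Counter(o for _, o in pairs)
    mi = 0.0
    for (s, o), count in joint.items():
        p_so = count / n
        mi += p_so * math.log2(p_so / ((p_s[s] / n) * (p_o[o] / n)))
    return mi

samples = [(s, observe(s)) for s in (random.randint(0, 1) for _ in range(100_000))]
print(f"Estimated leakage: {mutual_information(samples):.3f} bits per observation")
```

With no noise the estimate approaches one bit per observation, and with overwhelming noise it approaches zero; comparing such numbers across candidate designs is the kind of analysis the case study aims to enable.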
In order to deal with the large and ever-changing landscape of applications, it is helpful to have methods that examine, understand, and present interesting code patterns to us. Our third case study presents a technique for mining large amounts of code to find common patterns. We present a novel method of approximate graph clustering, which enables the mining of program graphs from large code bases. It also gives us an unbiased way to find similar code across many code bases, beginning to answer questions about how to design accelerators. Short of that goal, it helps architects inform their intuitive understanding of software.

Our final case study investigates the near-universal problem of malware. Despite decades of research and a multitude of commercial products, viruses and their brethren persist and indeed multiply each year. Instead of pursuing further traditional detection techniques, we ask whether hardware can learn to detect malware based on its behavior. By applying machine learning techniques to data on architectural behavior, we are able to build robust, secure malware detectors. Our approach avoids the large amount of manual work that can lead to buggy code. It also leads to very simple detectors which could be built into secure hardware.

The case studies presented here demonstrate the utility of quantitative methods. They further our understanding of systems; allow the rapid, detailed study of new applications; help create systems which are more robust; and guide designers in the creation of hardware. In short, quantitative methods help scale the intuition wall.
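As a concrete, hedged illustration of the final case study's approach, the sketch below trains a simple classifier on synthetic feature vectors standing in for hardware performance counter readings. The features, the synthetic data, and the choice of a random forest are assumptions made for the example; the dissertation's actual detectors, counters, and datasets are not reproduced here.

```python
# Minimal sketch: learn to flag malware from architectural behavior.
# Feature vectors stand in for per-interval hardware performance counter
# readings (e.g., instructions retired, cache misses, branch mispredictions).
# The synthetic data and the random forest model are illustrative
# assumptions, not the dissertation's actual features, data, or detector.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

def synth_samples(n, malware):
    """Draw hypothetical counter vectors; malware shifts the distribution."""
    base = np.array([1e6, 2e4, 5e3])             # insns, cache misses, mispredicts
    shift = np.array([0.7, 3.0, 2.0]) if malware else np.ones(3)
    return rng.normal(base * shift, base * 0.2, size=(n, 3))

X = np.vstack([synth_samples(500, False), synth_samples(500, True)])
y = np.repeat([0, 1], 500)                       # 0 = benign, 1 = malware
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X_tr, y_tr)
print(f"Held-out accuracy: {clf.score(X_te, y_te):.2f}")
```

The sketch only gestures at the workflow; the point made in the case study is that a model trained this way can be simple enough, for example a small tree or linear model over a handful of counters, to be implemented directly in secure hardware.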
