Our research group at Microsoft has spent the last few years applying static analysis methods to large application programs (over a million lines of code). In the first part of the talk, I will share some of the insights we have gained along the way. The first insight is that a static analysis method must scale to large programs: it must scale in performance, in both running time and memory consumption (the complexity metric that matters is average-case behaviour), and it must also scale in the quality of the information produced, a metric that is hard to measure and depends on the problem being solved.

The second insight is that large commercial applications differ from the benchmark programs typically used in the literature in many ways beyond sheer size. For instance, they routinely circumvent the type system, they use every conceivable language feature, they rely on large shared libraries, they contain some very large automatically generated functions, they define functions with large numbers of call sites, and they include many indirect call sites. All of these characteristics make analysis hard; in particular, they make the implementation of a scalable analysis an exercise in careful engineering.

In the second part of the talk, I will argue that it is possible to develop scalable static analysis methods for large programs using the following approach. First, implement an efficient algorithm, and carefully engineer it to scale; important requirements are modularity, a complete parser, and some form of garbage collection. Next, identify the conceptual limitations of the efficient algorithm, and examine the test programs for common cases or idioms in the code that expose these limitations. Finally, extend the original algorithm in a principled manner to account for these common cases, without compromising performance.
I will use my work on pointer analysis of large programs as an example of this approach. The key insight is that it is possible to identify common idioms in the code that are the bottlenecks for efficient algorithms. A scalable static analysis method can be developed if these idioms can be identified and accommodated.