In our recent article forecasting the 2010 U.S. congressional elections, we argue for a model based on theory rather than tracking (Lewis-Beck and Tien 2010). A sound theoretical explanation of vote choice in House races should, ceteris paribus, predict better than simple reliance on variables that merely proxy the vote, such as the generic ballot question. We posited a simple but classical explanation of the 2010 House vote, the referendum model, in which voters punish or reward the party in power according to its performance in office and the time it has had for that performance. In functional form, the model reads: House Seat Change = f(Economy, Popularity, Midterm). Measuring these variables at lags appropriate for forecasting yields the OLS estimates of model 1, shown in column 1 of table 1. Model 1 gives a forecast of −22 seats for the Democrats in 2010 when, in fact, they lost about 60 seats. The model 1 forecast appears “wrong” in two senses. First, substantively, it fails to predict the Republican takeover of the House. Second, scientifically, it is off by more than two standard errors of estimate (i.e., 38/17 > 2.0). Why did the model get it wrong this time, when its forecast was off by only one seat in the last midterm, in 2006? To answer this question, we must reconsider the model's specification. That reconsideration underscores the scientific value of the forecasting exercise: it provides a systematic trial-and-error method for model improvement.
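As a compact restatement, and using only the figures already reported above, the referendum model and the forecast-error check can be written as follows; the β coefficients and the error term u are generic regression notation, not the model 1 estimates from table 1, and SEE denotes the standard error of estimate of 17 seats:

\begin{align*}
  % Referendum model in generic regression form; the beta coefficients are
  % notation only, not the estimates reported in table 1.
  \text{House Seat Change} &= \beta_0 + \beta_1\,\text{Economy} + \beta_2\,\text{Popularity} + \beta_3\,\text{Midterm} + u \\
  % 2010 forecast-error check: actual change of about -60 seats versus the
  % forecast of -22, relative to a standard error of estimate of 17 seats.
  \frac{\lvert -60 - (-22) \rvert}{\mathrm{SEE}} &= \frac{38}{17} \approx 2.2 > 2
\end{align*}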