AMERICA'S MOST IMPORTANT PROBLEM

Using an extremely rich data series, this paper traces trends in America's most important problem from 1946 to 1976. Both long-term and short-term changes in public concern are charted. Additionally, the problem profiles of major sociodemographic groups are analyzed, and changes in the problem concerns of these groups are followed across time.

Tom W. Smith is an Associate Study Director at the National Opinion Research Center, University of Chicago. This article is a revised version of a paper presented at the 34th Annual AAPOR Conference, Buck Hill Falls, Pennsylvania, June 1979.

Public Opinion Quarterly © 1980 by The Trustees of Columbia University. Published by Elsevier North Holland, Inc. 0033-362X/80/0044-164/$1.75

excluding unavailable studies and variant question wordings, there remain 125 surveys from 1946 to 1976 with usable marginals.2 The standard usage asks, "What do you think is the most important problem facing this country today?" (See the Appendix for the variations and occurrences.) The frame of reference is the country at large, and responses inevitably deal with national or even global concerns rather than local or personal problems. The question also elicits a relative ranking of problems, not an absolute measurement of the level of anxiety in general. All problems compete for the public's attention, and the selection of one concern as most important necessitates the rejection of all others. Over time, it is impossible to determine whether people are more or less worried in toto, or whether they have absolutely a greater or smaller amount of concern about a particular issue. The changing priority assigned to problems can, however, be charted.
Thus, the relative rating of national problems can be followed for the entire postwar period.3 Given the historical depth, frequency, saliency, and topicality of the most-important-problem (MIP) data, it is surprising that they have received only limited use. There has been little attempt to chart secular trends or short-term shifts in opinion, and little interest has

2. Among the excluded versions were those that limited responses to a set of announced choices, those that focused on the family, the local community, or some other restricted area, and those that referred to the United States Congress.

3. Two technical features of the most-important-problem question need to be dealt with: the matters of multiple responses and of response categories. In 106 of the 125 surveys multiple responses were permitted, in 13 only single answers were allowed, and in 2 instances the status is uncertain. The ratio of responses to respondents ranges from 1.01 to 1.35. To handle the multiple responses, a new SPSS program, MULT RESPONSE, was employed. This program treats the responses rather than the respondents as the unit of analysis and thereby solves the problem of the extra responses. The second matter that had to be dealt with was the large number of response categories on any particular survey (up to 35) and the even greater number (over 100) that appears among surveys. To make the data analytically manageable and comparable, responses were grouped into four broad categories: foreign affairs, domestic issues, unclassifiable items, and don't knows. Foreign affairs (including war fears, military preparedness, and space) was subdivided into Vietnam-related and other. Domestic issues were partitioned into economic (inflation, unemployment, labor, etc.), social control (crime, violence, moral decline, etc.), civil rights (excluding race riots), government (corruption, lack of leadership, inefficiency, etc.),
miscellaneous, listed topics (e.g., education, slums, and environment), and unspecified others. Unclassifiable items consist of categories that could not be assigned to either the foreign or domestic sectors (most frequently the response Communism). These nine groups separated problems into substantively distinct areas that were comparable over time and of adequate size. Checks were also made to see whether the marginals were influenced by such artifacts as (1) variations in question wording, (2) number of responses allowed, (3) coding categories, (4) context and placement, and (5) changes in sample design. Except where noted, none of these factors was found to have an important effect.
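The two coding conventions described in note 3 can be illustrated with a short sketch: tabulating responses (rather than respondents) as the unit of analysis, as SPSS's MULT RESPONSE does, and recoding detailed answer categories into broader groups. The raw answer labels and the recode table below are invented for illustration only; they are not the study's actual coding frame.

```python
from collections import Counter

# Hypothetical raw MIP data: each respondent may name more than one
# problem, so there can be more responses than respondents.
respondents = [
    ["inflation", "crime"],                      # two mentions
    ["vietnam war"],
    ["inflation"],
    ["government corruption", "unemployment"],
]

# Illustrative recode table collapsing detailed codes (over 100 in the
# actual surveys) into broad groups like those described in note 3.
BROAD_GROUP = {
    "vietnam war": "foreign: Vietnam-related",
    "cold war": "foreign: other",
    "inflation": "domestic: economic",
    "unemployment": "domestic: economic",
    "crime": "domestic: social control",
    "civil rights": "domestic: civil rights",
    "government corruption": "domestic: government",
    "education": "domestic: miscellaneous listed",
    "communism": "unclassifiable",
}

# Flatten to one record per response: responses, not respondents,
# become the unit of analysis.
mentions = [m for person in respondents for m in person]
ratio = len(mentions) / len(respondents)  # responses per respondent

counts = Counter(BROAD_GROUP.get(m, "domestic: unspecified other")
                 for m in mentions)
# Percentages are based on responses, so they sum to 100 even though
# some respondents contributed two answers.
percents = {g: 100 * n / len(mentions) for g, n in counts.items()}
```

In this toy example the response-to-respondent ratio is 1.5, higher than the 1.01 to 1.35 range reported for the actual surveys, simply because the fabricated data are small; the mechanics are the same.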