In their recent article in this journal, "Validity of the 2004 U.S. News & World Report's Rankings of Schools of Social Work," Green and colleagues (2006) offered what they called empirical support for the rankings of the educational quality of graduate schools of social work. Although rebutting the commercialism of the Report and its use of a single rating survey, the authors set out only to show that other, objective variables of success bear out those same commercialized ratings. Specifically, they used what they called traditional measures of long standing, including admission selectivity, faculty publication, and the longevity of the program. In addition, they surveyed deans, directors, practitioners, graduate faculty, and graduate students. The authors asked respondents to rate only those programs with which they were familiar and to consider their record of scholarship, curriculum, and the quality of the faculty and graduates. Their findings bore out a positive correlation between these objective measures and the U.S. News & World Report (USNWR) rankings.

Comparing ourselves with others is natural. We have been doing it since the beginning of time: warrior against warrior, man against woman, white against black. Indeed, there may be value in setting benchmarks or standards of excellence that guide our development. Yet, with the personal respect I hold for the authors of the article, I wish to send up flags of caution and challenge some of the assumptions of their work and the work of others who, de facto, have created a hierarchy of excellence that I think is suspect.

INHERENT PROBLEMS WITH REPUTATIONAL STUDIES

First, let me point out that the authors' own text indicates that those surveyed were to evaluate only the schools with which they were familiar. In fact, the USNWR purports to rank only the master's degree programs, not the schools. From reading the promotional materials of the highly ranked schools, one would not know that only the master's program is ranked; rather, those materials are more likely to generalize the findings to the entire school. As one reads the article, it becomes apparent that the authors also fail to make this distinction clear.

Second, as someone who served several terms on the Commission on Accreditation of the Council on Social Work Education and who currently provides consultation to graduate social work programs, I have reviewed most of the social work master's degree programs in the country. Given ongoing changes in curriculum, personnel, and leadership, I think it is fair to assume that very few of us really know much about the specifics of many other graduate curricula.

Third, what we know about other programs often comes from collegial relationships with faculty members at other schools, the hiring of their doctoral graduates, the scholarship produced by their faculty, and a few other bits of data that may not be generalizable by our own standards of research. Yet these elements often provide the basis on which we think we know another program.

Fourth, the validity measure in this study was determined using the criteria of admission selectivity, faculty publication, and the longevity of the program. I suppose, as the authors have done, others will argue that these are the most important variables. After all, they are traditional. But has anyone raised the question of whether they should all carry the same weight in determining rankings?
That is, should we consider them to be of equal significance and value in establishing which program ranks higher than another? And are they truly the best measures? Informally, I have asked my colleague deans to tell me what they know about other master's degree programs, and invariably I am guided to comments about a great university, a great doctoral program, a great scholarship machine, and even a great dean. …