Abstract

Understanding collective behavior requires a social-evolutionary perspective. Colarelli (September 1998) offered a worthwhile introduction to this perspective and identified how it can be used to predict stability and change in organizations. One element of this social-evolutionary perspective is of such fundamental importance that it merits greater clarification and elaboration. Colarelli (1998) noted that the slippery nature of organizational goals makes the relationship between interventions and organizational outcomes problematic, and that what may work at one level of a nested hierarchy may not necessarily be functional at another level (p. 1050). This is true and important, but it fails to specify a social-evolutionary insight that has profound consequences: Interventions intended to promote organizational goals will succeed only when those interventions also promote the goals of the individuals working within the organization.

Evolutionary selection operates primarily on individuals, not on populations. In biological evolution, genetic information is most likely to be passed on when it has positive functional consequences for individual organisms. In social evolution, information (e.g., an idea) is most likely to be retained and replicated when it does some good for the individuals who encounter it. For example, stereotypic beliefs more readily become normative when those beliefs (more so than others) serve the immediate impression-management goals of individuals (Schaller & Conway, 1999). In general, innovative ideas become popular and interventions succeed not because they serve organizational goals but because they serve the salient needs of individuals. Sometimes abstract organizational goals are commensurate with short-term individual needs; when they are not, interventions that are good for the health of an organization may not succeed at all. Indeed, behaviors that have negative consequences for organizational health, but that serve individuals' immediate needs, may become culturally entrenched and difficult to change.

Consider one relevant organizational culture: the culture of psychological science. As in other cultures, scientific norms emerge and endure as a result of processes operating on individuals (Hull, 1988; Kitcher, 1993). Although one might assume that these norms serve the progress of science, they more directly serve the needs of scientists and are sometimes counterproductive to scientific progress. Many innovations that might better serve the goals of science fail to catch on. One obvious example is the near-ceremonial reliance on null hypothesis significance testing. For decades, scientists have been aware of the inferential limitations of significance testing. For decades, there have been impassioned pleas to use additional analytic methods that would facilitate the discovery and accurate description of psychological phenomena, and so would promote the progress of psychological science (e.g., Mulaik, Harlow, & Steiger, 1997). But these methods pose costs to individuals: scientists would have to learn new statistical techniques, revamp the way they teach their students, and deviate from comfortable traditions of statistical reporting. Although these individual costs are modest, and although almost everyone acknowledges that other statistical methods could serve the science better, the imperfect old statistical rituals persist. A second example pertains to scientists' critical appraisal of innovative theories.
The progress of science depends on the publication of ideas that transcend accepted wisdom. However, compared with older, more familiar ideas, brand-new ideas are perceived to have a greater likelihood of being wrong. The self-correcting tendency of scientific inquiry ensures that the systemic costs of publishing mistaken ideas are minimal compared with the benefits of innovation. But these systemic outcomes are largely irrelevant to individual scientists, to whom the publication of seemingly true but actually erroneous research poses meaningful hazards: if theories are wrong or research results are in error, then everyone who uses them has their research set back (Hull, 1988, p. 311). So, although it serves science best to nurture and support conceptual breakthroughs, individual scientists' needs for certainty can lead them to act otherwise. In fact, research evidence reveals some antinovelty bias within the manuscript review process, especially under conditions in which individual scientists are at the greatest risk of being compelled by the innovative ideas (Crandall & Schaller, 1998). Given these individual-level selection pressures, it is no surprise that deliberate attempts to foster conceptual innovation (e.g., McGuire, 1973) have had little systemic impact (Higgins, 1992).
