Taboos and Self-Censorship Among U.S. Psychology Professors.

We identify points of conflict and consensus regarding (a) controversial empirical claims and (b) normative preferences for how controversial scholarship, and scholars, should be treated. In 2021, we conducted qualitative interviews (n = 41) to generate a quantitative survey (N = 470) of U.S. psychology professors' beliefs and values. Professors strongly disagreed on the truth status of 10 candidate taboo conclusions: For each conclusion, some professors reported 100% certainty in its veracity and others 100% certainty in its falsehood. Professors more confident in the truth of the taboo conclusions reported more self-censorship, a pattern that could bias perceived scientific consensus regarding the inaccuracy of controversial conclusions. Almost all professors worried about social sanctions if they were to express their own empirical beliefs. Tenured professors reported as much self-censorship and as much fear of consequences as untenured professors, including fear of getting fired. Most professors opposed suppressing scholarship and punishing peers on the basis of moral concerns about research conclusions and reported contempt for peers who petition to retract papers on moral grounds. Younger, more left-leaning, and female faculty were generally more opposed to controversial scholarship. These results do not resolve empirical or normative disagreements among psychology professors, but they may provide an empirical context for their discussion.

Diversity Is Diverse: Social Justice Reparations and Science

Because the term "diversity" has two related but different meanings, what authors mean when they use the term is inherently unclear. In its broad form, it refers to vast variety. In its narrow form, it refers to human demographic categories deemed deserving of special attention by social justice–oriented activists. In this article, I review Hommel's critique of Roberts et al. (2020), which, I suggest, essentially constitutes two claims: that Roberts et al.'s (2020) call for diversity in psychological science focuses exclusively on the latter, narrow form of diversity and ignores the scientific importance of diversity in the broader sense, and that ignoring diversity in the broader sense is scientifically unjustified. Although Hommel's critique is mostly justified, this is not because Roberts et al. (2020) are wrong to call for greater social justice–oriented demographic diversity in psychology but because Hommel's call for the broader form of diversity subsumes that of Roberts et al. (2020) and has other aspects critical to creating a valid, generalizable, rigorous, and inclusive psychological science. In reviewing this debate, I also highlight omissions, limitations, and potential downsides of the narrow manner in which psychology and the broader academy are currently implementing diversity, equity, and inclusion.

The Myth of the Need for Diversity Among Subjects in Theory-Testing Research: Comments on “Racial Inequality in Psychological Research” by Roberts et al. (2020)

Roberts and colleagues focus on two aspects of racial inequality in psychological research, namely an alleged underrepresentation of racial minorities and the effects attributed to this state of affairs. My comment focuses only on one aspect, namely the assumed consequences of the lack of diversity in subject populations. Representativeness of samples is essential in survey research or applied research that examines whether a particular intervention will work for a particular population. Representativeness or diversity is not necessary in theory-testing research, where we attempt to establish laws of causality. Because theories typically apply to all of humanity, all members of humanity (even American undergraduates) are suitable for assessing the validity of theoretical hypotheses. Admittedly, the assumption that a theory applies to all of humanity is also a hypothesis that can be tested. However, to test it, we need theoretical hypotheses about specific moderating variables. Supporting a theory with a racially diverse sample does not make conclusions more valid than support from a nondiverse sample. In fact, cause-effect conclusions based on a diverse sample might not be valid for any member of that sample.

The Burden for High-Quality Online Data Collection Lies With Researchers, Not Recruitment Platforms

A recent article in Perspectives on Psychological Science (Webb & Tangney, 2022) reported a study in which just 2.6% of participants recruited on Amazon's Mechanical Turk (MTurk) were deemed "valid." The authors highlighted some well-established limitations of MTurk, but their central claims—that MTurk is "too good to be true" and that it captured "only 14 human beings … [out of] N = 529"—are radically misleading, yet have been repeated widely. This commentary aims to (a) correct the record (i.e., by showing that Webb and Tangney's approach to data collection led to unusually low data quality) and (b) offer a shift in perspective for running high-quality studies online. Negative attitudes toward MTurk sometimes reflect a fundamental misunderstanding of what the platform offers and how it should be used in research. Beyond pointing to research that details strategies for effective design and recruitment on MTurk, we stress that MTurk is not suitable for every study. Effective use requires specific expertise and design considerations. Like all tools used in research—from advanced hardware to specialist software—the tool itself places constraints on what one should use it for. Ultimately, high-quality data is the responsibility of the researcher, not the crowdsourcing platform.
