Abstract

In scholarly publishing, blacklists aim to register fraudulent or deceptive journals and publishers, also known as "predatory", in order to limit the spread of unreliable research and the growth of fake publishing outlets. However, blacklisting remains a highly controversial activity for several reasons: there is no consensus on the criteria used to identify fraudulent journals, the criteria applied are not always transparent or relevant, and blacklists are rarely updated regularly. Cabell's paywalled blacklist service attempts to overcome some of these issues by reviewing fraudulent journals against transparent criteria and by providing purportedly up-to-date information at the journal entry level. We tested Cabell's blacklist to determine whether it could be adopted as a reliable tool by stakeholders in scholarly communication, including our own academic library. To do so, we used a copy of Walt Crawford's Gray Open Access dataset (2012-2016) to assess the coverage of Cabell's blacklist and to gain insight into its methodology. Of the 10,123 journals that we tested, 4,681 are included in Cabell's blacklist. Of these, 3,229 are empty journals, i.e. journals in which not a single article has ever been published. Other collected data points to questionable weighting and reviewing methods and shows a lack of rigour in how Cabell applies its own procedures: some journals are blacklisted on the basis of only one to three criteria, identical criteria are recorded multiple times in individual journal entries, discrepancies exist between reviewing dates and the version of the criteria used and recorded by Cabell, some reviewing dates are missing, and we observed two journals blacklisted twice, each time with a different number of violations. Based on these observations, we conclude with recommendations and suggestions that could help improve Cabell's blacklist service.

Highlights

  • As academic librarians, we are often confronted with questions from researchers who are unsure about the quality or the serious character of particular open access journals

  • A recent cross-sectional analysis (Strinzel, Severin, Milzow, & Egger, 2019) of how whitelists (e.g. the Directory of Open Access Journals (DOAJ)) and blacklists (e.g. Stop Predatory Journals) may help the scholarly community tackle fraudulent publishing shows, for instance, that some journals and publishers fall into a gray area, as they are included in both whitelists and blacklists at the same time

  • Crawford used codes in his Gray OA 2012-2016 dataset to tag the different types of questionable journals that he encountered in his examination of Beall’s lists


Introduction

We are often confronted with questions from researchers who are unsure about the quality or the serious character of particular open access journals. We remind researchers of the existence of whitelists (e.g., the Directory of Open Access Journals – DOAJ) and of the Think. Check. Submit. campaign. This type of work is useful in identifying whether or not a journal is fake or deceptive, that is, whether or not it can be considered 'predatory' because it requires the payment of fees while deliberately "sow[ing] confusion" and clearly deceiving both readers and authors by "deviat[ing] from best editorial and publication practices" (Grudniewicz et al., 2019). Some fake journals have managed to integrate the tools and services of third-party providers, such as bibliographic databases to which most academic libraries subscribe (see Manca, Cugusi, Dvir, & Deriu, 2017; Nelson & Huffman, 2015; Somoza-Fernández, Rodríguez-Gairín, & Urbano, 2016).
