Reasons for Retraction of Biomedical Articles Written by Eastern Mediterranean and Turkish Authors: A Comprehensive Cross-Sectional Study During 2010-2019
Background: Article retraction means removing a published article from a journal because of ethical issues or scientific errors in order to correct the literature. In this study, we aimed to determine the reasons for retracting biomedical articles written by authors from Iran, Saudi Arabia, Pakistan, Egypt, and Turkey. Materials and Methods: This cross-sectional study included all retracted biomedical articles with first authors affiliated with Iran, Saudi Arabia, Pakistan, Egypt, or Turkey, retracted between September 1, 2010, and September 1, 2019. Data were extracted from Retraction Watch, MEDLINE, PubMed Central (PMC), Clarivate Analytics, and Scopus. Each article’s information was entered into a data collection form and analyzed using SPSS version 24. Results: Of 436 retracted articles, Iran had the highest number (223), followed by Turkey (80), Egypt (72), Saudi Arabia (35), and Pakistan (26). Common causes of retraction included plagiarism, duplication, authorship issues, and fake peer review. In Iran, fake peer review (42.6%) and authorship issues (41.3%) were most prevalent. Significant inter-country differences were found in retraction frequency and causes. The most affected fields were biology, biochemistry, oncology, cardiovascular medicine, surgery, and pathology. Conclusion: The results showed that scientific misconduct (plagiarism, duplication, authorship issues, and fake peer review) was the main reason for retraction of articles in the five studied countries. To reduce such misconduct, regional regulatory policies, improved editorial practices, and enhanced research ethics training are urgently needed.
- Research Article
- 10.1002/acr.80005
- Jan 19, 2026
- Arthritis care & research
We aimed to describe the trends and main reasons for study retraction in the rheumatology literature. We reviewed the Retraction Watch database to identify retracted articles in rheumatology. We recorded the main study characteristics, authors' countries, reasons for retraction, time from publication to retraction, and trends over time. Reasons for retraction were classified as scientific misconduct, data/figure errors, or other reasons. Main article features and causes of retraction in rheumatology were compared with a sample of articles from other medical specialties. A total of 381 (79.5% original articles) rheumatology articles were retracted between 1989 and 2024. Most originated from Asia (68.5%), particularly China (50.7%). Scientific misconduct accounted for 75.3% of retractions, followed by data errors (14.9%) and other reasons (7.6%). Common misconduct types included data fabrication, fake peer review, duplication, and authorship issues. The median time from publication to retraction was 18 months (interquartile range 9-46), with one-third of articles requiring more than 36 months to be retracted. Time to retraction did not improve over time. The number of retractions steadily increased over time, from 18 in 2000-2009 to 117 in 2010-2019 and 207 in 2020-2023 (P < 0.001). Compared with other medical specialties, rheumatology exhibited similar retraction patterns, differing mainly in geographic distribution. Retractions in rheumatology have risen substantially, largely due to misconduct. This trend may reflect an increase in questionable research practices or improved detection. Strengthening early-career education, institutional oversight, and ethical research culture is essential to enhance transparency and integrity in the field.
- Research Article
- 10.1057/s41599-023-02095-x
- Sep 21, 2023
- Humanities and Social Sciences Communications
Retractions of peer-reviewed biomedical journal articles with Indian authorship have been on the rise for many years. Our study aimed to investigate the reasons behind these retractions, namely plagiarism, falsification, fabrication, duplicate publication, author conflicts, ethical issues, fake peer reviews, and data-related issues, besides providing year-wise trends regarding retraction, authorship, impact factor, and citations. We retrieved retracted publications with Indian affiliations indexed in MEDLINE between 1 January 1990 and 31 December 2021. During this period, a total of 619 papers from 372 different journals with median values (interquartile range) pertaining to impact factor [3.2 (1.5, 5.2)], retraction time [24 (10, 51) months], pre-retraction citations [4 (1, 12)], and post-retraction citations [4 (2, 12)] were retracted. While retractions still account for a small fraction of all publications (0.1%), the overall rate of retractions, that is, the number of retractions relative to the number of newly published journal articles in a given year, has been increasing. The reasons for retraction included plagiarism (27%), falsification and fabrication (26%), duplicate publication (21%), erroneous data (12%), authorship issues (4%), fake peer reviews (3%), and ethical and funding issues (2%). We have analysed these reasons separately and compared them with each other. Besides a spurt in retractions due to plagiarism, instances of falsification have been escalating over the past decade. Half of the papers retracted on grounds of falsification were published by repeat-offender authors in high-impact journals. Furthermore, 82% of retracted papers continued to accumulate citations even after the release of the journal retraction notices. The increase in retractions raises concerns over research quality as well as the wastage of scientific resources, which is especially pressing considering the present environment of scarce funding.
The problem of retractions due to reasons such as plagiarism, duplicate publication, authorship issues, and ethical issues, as well as post-retraction citations, can be mitigated by educating researchers and journal publishers and raising awareness of publication ethics and responsible research conduct. Retractions due to fabrication, falsification, and fake peer reviews are more difficult to address and require further research to identify effective solutions.
- Research Article
- 10.1016/j.opresp.2025.100496
- Sep 23, 2025
- Open Respiratory Archives
Trends and Characteristics of Retracted Articles in the Smoking Field: An Observational Study
- Conference Article
- 10.51408/issi2025_064
- Jul 10, 2025
Retracted citations remain a significant concern in academia as they perpetuate misinformation and compromise the integrity of scientific literature despite their invalidation. To analyze the impact of retracted citations, we focused on two retraction categories: plagiarism and fake peer review. The data set was sourced from Scopus and the reasons for retraction were mapped using the Retraction Watch database. The retraction trend shows a steady average growth in plagiarism cases of 1.2 times, while fake peer review exhibits a fluctuating pattern with an average growth of 5.5 times. Although fewer papers are retracted in the plagiarism category compared to fake peer review, plagiarism-related papers receive 2.5 times more citations. Furthermore, the total number of retracted citations for plagiarized papers is 1.8 times higher than that for fake peer review papers. Within the plagiarism category, 46% of the retracted citations are due to plagiarism, while 53.6% of the retracted citations in the fake peer review category are attributed to fake peer review. The results also suggest that fake peer review cases are identified and retracted more rapidly than plagiarism cases. Finally, self-citations constitute a small percentage of citations to retracted papers but are notably higher among citations that are later retracted in both categories.
- Research Article
- 10.32921/2225-9929-2024-1-55-22-28
- Jan 1, 2024
- Journal of Health Development
Introduction. Nowadays, the scientific community attaches great importance to the quality of scientific publications. Published papers containing inaccuracies and errors are subjected to the retraction procedure, which is essential to ensure the integrity and ethics of publications, revealing irregularities such as fabrication, falsification, conflicts of interest, and plagiarism. This study aims to analyse retracted articles in medicine and public health from Kazakhstan to identify the causes of retraction and to draw the scientific community's attention to improving the quality of scientific publications. Methods. In this study, the authors searched the Retraction Watch database with up-to-date information as of 18 January 2024. They found 129 retracted articles from Kazakhstan, among which 13 were related to medicine and public health. Inclusion criteria were retracted articles from Kazakhstan in the categories "Medicine" and "Public Health", excluding articles not from Kazakhstan or not related to these categories. Results. The search results showed that of the 13 retracted articles, eight were related to medical topics, and five were in the field of public health and safety. Reasons for retraction included data concerns, journal investigations, lack of ethical approval, duplicate articles, fake peer review, and plagiarism. Additional analysis showed that half of the retracted articles resulted from joint work between Kazakhstani and Russian authors. Research articles were most often retracted, and the interval between publication and retraction ranged from 6 to 54 months. Conclusion. The discussion of the results emphasises the importance of adhering to ethical standards in medical research. Retraction of articles plays a crucial role in maintaining the quality and reliability of scientific publications, underscoring the need for accuracy in conducting and publishing research in the field.
- Research Article
- 10.56294/dm2025655
- Feb 11, 2025
- Data and Metadata
Introduction: The study analyzes emerging trends in scientific fraud, focusing on article mills, fraudulent peer reviews, and randomly generated content, practices that have transformed the dynamics of scientific retractions. Methods: With a descriptive, cross-sectional approach, 37,480 retracted documents published between 2015 and 2024 were analyzed, using data from the Retraction Watch database. Information was collected on authors, countries of affiliation, dates, areas of knowledge, and reasons for retraction. Results: The results reveal a notable change in the causes of retraction. Between 2015 and 2019, plagiarism (21.6%) and duplication (14%) led, while between 2020 and 2024 they dropped to 6.8% and 4%, respectively. In this latter period, article mills (30.1%), fake peer reviews (19.9%), and randomly generated content (23.3%) increased. These practices mainly affected Business, Technology, and Social Sciences, with China and India leading in these fraudulent activities. Conclusions: The study concludes that these new forms of scientific fraud represent a critical challenge to the integrity of the publication system. It underscores the need to strengthen editorial policies, implement advanced screening tools, and promote ethics education to protect the credibility of global science.
- Research Article
- 10.1111/bju.14706
- Mar 4, 2019
- BJU International
To evaluate the landscape of retractions of literature and to determine the prevalence of research misconduct in the field of urology. Three databases (PubMed, Embase, and Retraction Watch) were queried for all retracted studies on urological topics in both urological and non-urological journals from April 1999 to March 2018. Two reviewers screened the records and determined the final list of articles to be included in the analysis. A total of 138 articles met the inclusion criteria. Over 80% of retractions occurred after 2009. Retractions originated from 76 different journals (13 urological journals) and 28 countries. The most common reasons for retraction were plagiarism (28%), fake peer review (20%), error (20%), and falsification of data (13%). Misconduct accounted for two-thirds of the retractions (n = 93). A large watermark, indicating retraction of the article, was present in 75% of the manuscripts. Articles were cited a total of 4454 times; 38% of these citations occurred after retraction. The majority of retracted articles related to urological oncology (70%). The highest number of retractions for an individual author was five. Rates of retraction among popular urological journals since 2010 have increased but remain a small proportion of all publications: BJUI, 0.189%; World Journal of Urology, 0.132%; European Urology, 0.058%; Urology, 0.047%; and Journal of Urology, 0.024%. Retractions of urological literature, similarly to retractions of other biomedical literature, have been rising over the last decade. The majority of these retractions stem from research misconduct. Despite retractions, flawed articles continued to be cited.
- Research Article
- 10.1111/j.1750-4910.2018.tb00020.x
- Sep 1, 2018
- Nurse Author & Editor
Retraction is an important part of the scientific method. Research that is flawed—because of error or fabrication of data—is removed from the scientific record. In clinical disciplines, practice based on unsound observations may result in patients being given (or conversely denied) effective care. Nursing is out of kilter with much of the rest of science; fewer papers are retracted and for different reasons. Why is this, and what do we need to do differently as a discipline? Established in 2010, Retraction Watch (www.retractionwatch.com) has the dual aim of tracking and reporting retractions in science. The number of papers that are retracted seems to be increasing exponentially. In 2016 Retraction Watch reported a 37% increase in retracted articles, up from 500 in 2014 to 684 in 2015 (Belluz, 2). Whether this trend demonstrates an increase in scientific misconduct or improved vigilance in spotting bad science is unclear. Interestingly, misconduct is, perhaps, not the most common reason for retraction. Wager and Williams (6) reported that 40% of papers were retracted because of honest error or non-replicable findings and only 28% because of research misconduct. They also note that it is the original author (63%) who does most of the retracting. That said, there are novel reasons for retraction that may alter this arrangement. Over the past few years there has been a paroxysm of retractions because of fake peer reviews (see for example, McCook, 5) and it has been argued that this may become the leading reason for retraction over the coming few years. It is probably not helpful for me to describe how you might go about manipulating the peer review system, but suffice it to say it is not terribly difficult (with the right motivation). In part, this is because editors find it increasingly challenging to find peer reviewers.
Also relevant to this conversation is the observation that there is a strong correlation between the frequency of retraction and journal impact factor (put another way, publish in a high-impact journal and you're more likely to get retracted) (Fang & Casadevall, 3). This observation may suggest that papers in leading journals come under more intense scrutiny from the scientific community and consequently more errors are spotted. The consequences of having a paper retracted can be profound. Researchers caught fabricating data have, quite rightly, lost their jobs, professional registrations, and have had their reputations wrecked. It is both sobering and illuminating to read the case examples published on the Retraction Watch blog and there are examples where nurses have featured (the case of Moon-Fai Chan is informative https://retractionwatch.com/category/by-journal/j-clinical-nursing/). The stigma associated with having a paper retracted can serve as a powerful disincentive. The unintended consequence may be that scientists who have made a genuine error fear the consequence of admitting it. Retraction in nursing science differs in a number of important ways from other disciplines. Rates of retraction were reported in a systematic review of the Journal Citation Reports (JCR) nursing science journals (Al-Ghareeb et al., 1). In total the authors identified just 29 retracted papers. The review did identify a small—statistically significant—increase in the number of papers retracted over time. There was also a significant correlation between the number of papers retracted and journal impact factor, but the correlation was negative (to put it another way, if you publish in a high-impact nursing journal you are less likely to get retracted, which is contrary to the findings of Fang and Casadevall cited above).
Two-thirds of included papers were retracted because they were duplicate publications and one-quarter were randomized controlled trials. “Nursing science” was defined in the review as work published in JCR nursing journals. Of course, there are many journals not on this list and many nurses don't publish their “best work” in nursing journals. As one of the first reviews of retraction in nursing, there are important limitations that need to be considered. That said, we need to consider how we might explain the apparent differences between nursing and science more generally. Men are seemingly more likely to have a paper retracted than women. This might of course be explained by women's under-representation in science, particularly in senior roles. In a female-dominated profession such as nursing this may account—at least in part—for the lower rates of retraction in the discipline. But is it conceivable that nurses never fabricate data? Probably not. Why then have no papers been retracted because of data fabrication? In my experience nurses are not sufficiently critical of published work; there is not the intensity of academic debate over studies published in nursing journals that you might find in medicine, for example. It is this intense post-publication scrutiny that weeds out error and fraud. At the Journal of Psychiatric and Mental Health Nursing, a journal that I edit, we get few letters to the editor (in 2017 we had one letter submitted) challenging or debating papers that we have published. I think this does a disservice to the profession. Few articles in nursing science have been retracted because the author reported that they had made an error, again in stark contrast with science more generally. Is this because authors do not check papers they have published? Or if they spot an error do they not feel ethically obliged to report it? It cannot be overstated that in a clinical discipline research impacts patient care.
If there is an error it is imperative that it is removed from the scientific record. In nursing science there is a preponderance of qualitative methodologies. An anecdotal observation (no one has ever counted) is that qualitative papers rarely seem to get retracted. Presumably qualitative researchers do fabricate (or tweak) data? It must happen that when looking for a quote to illuminate a theme an author is tempted to reinterpret or simply invent one. After all, this is an area of enquiry where interpretation is at the core of the work. Qualitative researchers could publish their data sets in a repository (as is the trend with clinical trials data) allowing other researchers to review the core data. I can't find an example of this happening. When talking to qualitative researchers, the rationale they give is that it is too difficult to anonymize data. Other ways of spotting fraud in qualitative research include the use of linguistic packages to detect similarities in participants' included quotes. It remains an important, but open, question as to whether research misconduct is actually less prevalent among qualitative researchers. Further work is required. I, like most editors, find it challenging to recruit peer reviewers who will produce a considered, detailed and timely review. There is some evidence that the quality of peer review is in decline, if you can measure the quality of a review by the number of words; in 2016 the average length of a review was 457 words, a year later this had dropped to 342 words (“It's not the size that matters,” 2018). Many of the reviews I receive have been even briefer. My point is that I think reviewers are becoming less invested in taking the time to do a detailed, thorough peer review. Checking to see if data might have been fabricated takes time and effort. It seems to me that there is a need to improve the quality of peer review, but how to do this is a vexing issue that editors grapple with on an almost daily basis.
Retraction is an important part of the scientific method. In nursing, this system of self-correction is not working as it should and we need to fix it. These are my five suggestions about how we might do that: We need to see a year-on-year increase in the number of nursing science papers that are retracted. Nursing is a clinical discipline; ultimately our research impacts practice. If we don't have a focus on weeding out bad science we put our patients at risk. We don't usually leave comments open on articles published on Nurse Author & Editor, but this article, to me, begs for a discussion. Dr. Gray raises lots of issues and while he presents five recommendations, there is much in this article that remains “up in the air.” Therefore, commentary from readers is encouraged. I look forward to the discussion! –LHN Richard Gray, RN, PhD is Professor of Clinical Nursing Practice, School of Nursing and Midwifery, La Trobe University, Melbourne, VIC 3086, Australia. He is also the Editor of the Journal of Psychiatric and Mental Health Nursing. You can contact him via email at: r.gray@latrobe.edu.au
- Research Article
- 10.2478/jdis-2023-0022
- Sep 22, 2023
- Journal of Data and Information Science
Purpose The number of retracted papers from Chinese university-affiliated hospitals is increasing, which has raised much concern. The aim of this study is to analyze the retracted papers from university-affiliated hospitals in mainland China from 2000 to 2021. Design/methodology/approach Data for 1,031 retracted papers were identified from the Web of Science Core Collection database. The information on the hospitals involved was obtained from their official websites. We analyzed the chronological changes, journal distribution, discipline distribution, and retraction reasons for the retracted papers. The grade and geographic locations of the hospitals involved were explored as well. Findings We found a rapid increase in the number of retracted papers, while the retraction time interval is decreasing. The main reasons for retraction are plagiarism/self-plagiarism (n=255), invalid data/images/conclusions (n=212), fake peer review (n=175), and honest error (n=163). The disciplines are mainly distributed in oncology (n=320), pharmacology & pharmacy (n=198), and research & experimental medicine (n=166). About 43.8% of the retracted papers were from hospitals affiliated with prestigious universities. Research limitations This study fails to differentiate between retractions due to honest error and retractions due to research misconduct. We believe that there is a fundamental difference between honest-error retractions and misconduct retractions. Another limitation is that authors of the retracted papers have not been analyzed in this study. Practical implications This study provides a reference for addressing research misconduct in Chinese university-affiliated hospitals. It is our recommendation that universities and hospitals should educate all their staff about the basic norms of research integrity, sanction authors of papers retracted for misconduct, and reform the unreasonable evaluation system.
Originality/value Based on the analysis of retracted papers, this study further analyzes the characteristics of institutions of retracted papers, which may deepen the research on retracted papers and provide a new perspective to understand the retraction phenomenon.
- Research Article
- 10.11606/s1518-8787.2025059006328
- Jan 1, 2025
- Revista de Saúde Pública
Objective: To characterize retractions of biomedical research papers that had at least one author affiliated with a Latin American (LATAM) institution. Methods: We conducted a cross-sectional study of retracted research papers published in scientific journals focusing on the field of biomedical research and identified by means of the Retraction Watch database. The retracted articles identified were required to have at least one author whose institutional affiliation was in a LATAM country. Data were collected on the authors’ countries and institutional affiliations, the reason for retraction, dates of publication and retraction, indexing, journal quartile, and impact factor. Reasons for retraction were categorized into three major groups, i.e., scientific misconduct, error, and not specified. Results: According to Retraction Watch, 181 papers retracted between 1987 and 2024 fulfilled the inclusion criteria. Guatemala, Bolivia, Peru, Panama, Ecuador, Colombia, and Argentina were the countries that had a retraction rate above 1 per 10,000 papers throughout the study period. The principal reason for retraction was scientific misconduct (63.0%), followed by honest error (21.5%). The main causes of retraction due to scientific misconduct were ethical and legal problems (33.1%), followed by fabrication/falsification (20.2%). Conclusion: The number of retractions in some LATAM countries, mainly due to scientific misconduct, highlights the need to strengthen ethical practices in research. Future initiatives should focus on developing and evaluating effective strategies to prevent misconduct and promote scientific integrity.
- Research Article
- 10.17821/srels/2023/v60i6/171172
- Dec 31, 2023
- Journal of Information and Knowledge
The study aims to examine retracted articles in the biomedical literature and inspect the characteristics of retracted papers. The PubMed database was searched for retracted articles from 2012 to 2022. Four hundred twenty-one retracted articles were identified and used to examine retraction characteristics, publishers, the impact factor of retracted articles, and reasons for retraction. China published more than one-third of the retracted articles. Four authors wrote 16.86 per cent of the retracted papers. Springer has the highest retraction rate. The retraction rate has been increasing since 2012. Of 421 articles, 364 (86.46 per cent) had an IF (Journal Citation Reports). Reasons for retraction include plagiarism, fake peer review, duplication of an article, concerns/issues about data, error in data, error in analyses, error in methods, a notice with limited or no information, lack of IRB/IACUC approval, concerns/issues about referencing/attributions, lack of approval from a third party, lack of approval from an author, and author withdrawal. These findings suggest a need for a strict and more deliberate role of editors, reviewers, institutions, and governments to emphasize the importance of avoiding research wrongdoing. This study reflects the mistakes made by the academic community in the effort to get work published.
- Research Article
- 10.1177/01622439221112463
- Jul 17, 2022
- Science, Technology, & Human Values
Over the past decade, the phenomenon of "fake" peer reviews has caused growing consternation among scholarly publishers. Yet despite the significant behind-the-scenes impact that anxieties about fakery have had on peer review processes within scholarly journals, the phenomenon itself has been subject to little scholarly analysis. Rather than treating fake reviews as a straightforward descriptive category, in this article, we explore how the discourse on fake reviews emerged and why, and what it tells us about its seeming antithesis, "genuine" peer review. Our primary sources of data are two influential adjudicators of scholarly publishing integrity that have been critical to the emergence of the concept of the fake review: Retraction Watch and the Committee on Publication Ethics. Via an analysis of their respective blog posts, Forum cases, presentations, and best practice guidance, we build a genealogy of the fake review discourse and highlight the variety of players involved in staking out the fake. We conclude that constant work is required to maintain clear lines of separation between genuine and fake reviews and highlight how the concept has served to reassert the boundaries between science and society in a context where they have increasingly been questioned.
- Research Article
- 10.1186/s13643-023-02439-3
- Jan 12, 2024
- Systematic Reviews
Background This systematic review aimed to investigate the relationship between retraction status and methodological quality in retracted non-Cochrane systematic reviews. Method PubMed, Web of Science, and Scopus databases were searched with keywords including systematic review, meta-analysis, and retraction or retracted as a type of publication until September 2023. There were no time or language restrictions. Non-Cochrane medical systematic review studies that were retracted were included in the present study. The data related to the retraction status of the articles were extracted from the retraction notice and Retraction Watch, and the quality of the methodology was evaluated with the AMSTAR-2 checklist by two independent researchers. Data were analyzed in Excel 2019 and SPSS 21 software. Result Of the 282 systematic reviews, the corresponding authors of 208 (73.75%) articles were from China. The average interval between publication and retraction of an article was about 23 months, and about half of the non-Cochrane systematic reviews were retracted in the last 4 years. The most common reasons for retraction were fake peer review and unreliable data. Editors and publishers were the most frequent retractors or requestors of retractions. More than 86% of the retracted non-Cochrane SRs were published in journals with an impact factor above two and had a critically low quality. Items 7, 9, and 13 among the critical items of the AMSTAR-2 checklist received the lowest scores. Discussion and conclusion There was a significant relationship between the reasons for retraction and the quality of the methodology (P-value < 0.05). Plagiarism software and use of the COPE guidelines may decrease the time to retraction. In some countries, strict rules for promoting researchers increase the risk of misconduct.
To avoid scientific errors and improve the quality of systematic reviews/meta-analyses (SRs/MAs), it would be better for each journal to establish protocol registration and retraction guidelines for SRs/MAs.
- Research Article
- 10.1108/gkmc-01-2024-0037
- Jul 12, 2024
- Global Knowledge, Memory and Communication
Purpose The study aims to profile the scientific retractions in the top five global universities and provide descriptive statistics on specific subjects. Design/methodology/approach The data on reasons behind retractions are manually extracted from the Retraction Watch Database. The top five global universities according to the Times Higher Education global ranking of 2024 are selected for this study. Findings The study found that Stanford University had the highest number of retractions across the institutions assessed in the fields of basic life sciences and health sciences. Notably, the predominant reasons for these retractions were identified, with “unreliable results” being the most prevalent, accounting for 53 retractions. Following closely was the category of “errors in results and/or conclusions”, contributing to 51 retractions. MIT has the longest time between publication and retraction of any subject group, with an average of 1,701 days. Research limitations/implications This study has some limitations, as it only analysed the retractions of the top five global universities. Originality/value The study provides a comprehensive analysis of retractions in academic publishing, focusing on reasons, time gaps, article types, and accessibility categories across prestigious universities. The paper underscores the critical role of retractions in maintaining the integrity of scientific literature, emphasizing the importance of transparent correction and responsible peer review to ensure the reliability and trustworthiness of published research. Results show that common reasons for retractions include duplication, fake peer review, and plagiarism, underlining the need for ethical research standards.
- Research Article
- 10.1177/1747016119898400
- Jan 1, 2020
- Research Ethics
For more than 25 years, research misconduct (research fraud) has been defined as fabrication, falsification, or plagiarism (FFP)—although other research misbehaviors have also been added in codes of conduct and legislation. A critical issue in deciding whether research misconduct should be subject to criminal law is its definition, because not all behaviors labeled as research misconduct qualify as serious crimes. But the assumption that all FFP is fraud and all non-FFP is not is far from obvious. In addition, new research misbehaviors have recently been described, such as prolific authorship and fake peer review, or have been amplified, such as duplication of images. The scientific community has been largely successful in keeping criminal law away from cases of research misconduct. Alleged cases of research misconduct are usually looked into by committees of scientists, usually from the same institution or university as the suspected offender, in a process that often lacks transparency. Few countries have or plan to introduce independent bodies to address research misconduct; so for the coming years, most universities and research institutions will continue handling alleged research misconduct cases with their own procedures. A global operationalization of research misconduct with clear boundaries and clear criteria would be helpful. There is room for improvement in reaching global clarity on what research misconduct is, how allegations should be handled, and which sanctions are appropriate.