Abstract

Journals and publishers are increasingly using artificial intelligence (AI) to screen submissions for potential misconduct, including plagiarism and data or image manipulation. While using AI can enhance the integrity of published manuscripts, it can also increase the risk of false or unsubstantiated allegations. Ambiguities related to journals’ and publishers’ responsibilities concerning fairness and transparency also raise ethical concerns. In this Topic Piece, we offer the following guidance: (1) All cases of suspected misconduct identified by AI tools should be carefully reviewed by humans to verify accuracy and ensure accountability; (2) Journals/publishers that use AI tools to detect misconduct should use only well-tested and reliable tools, remain vigilant concerning forms of misconduct that cannot be detected by these tools, and stay abreast of advancements in technology; (3) Journals/publishers should inform authors about irregularities identified by AI tools and give them a chance to respond before forwarding allegations to their institutions, in accordance with Committee on Publication Ethics guidelines; (4) Journals/publishers that use AI tools to detect misconduct should screen all relevant submissions, not just randomly or purposefully selected ones; and (5) Journals should inform authors about their definition of misconduct, their use of AI tools to detect misconduct, and their policies and procedures for responding to suspected cases of misconduct.
