Abstract

Recent years have seen formidable advances in artificial intelligence. Developments include a large number of specialised systems, existing or planned, for scientific research, data analysis, translation, text production and design, grammar checking and stylistic revision, plagiarism detection, and scientific review, as well as general-purpose AI systems for searching the internet and generative AI systems that produce text, images, video, and music. These systems promise to make many aspects of work easier and simpler. Yet blind trust in AI systems and uncritical, careless use of their output are dangerous: these systems have no inherent understanding of the content they process or generate, but merely simulate such understanding by reproducing statistical patterns extracted from their training data. This article discusses the potential and risks of using AI in scientific communication and explores possible systemic consequences of widespread AI adoption in this context.
