Abstract

Revolutionizing systematic reviews: the ASReview project's call for trust and transparency in AI-aided screening tools

In today's era of information overload, the number of scientific papers and policy reports on any given topic is growing at an unprecedented rate, argues Professor Rens van de Schoot of Utrecht University. Active learning is an effective tool for screening large volumes of textual data and can save up to 95% of screening time, yet scholars often compensate for the volume by developing narrower searches, increasing the risk of missing relevant studies. Professor van de Schoot notes that systematic reviews, unlike a simple Google search or a question posed to a chatbot such as ChatGPT, are critical for scholars, clinicians, policy-makers, journalists, and ultimately the general public, and he calls for greater transparency in how the literature is screened for relevant studies. To implement AI trustworthily in the screening process, software tools need to be fully transparent: the exact algorithms used to produce the ranking scores should be known, or better still, the source code should be available so that experts can check (and adjust) the implemented algorithms. He further argues that the decisions the AI makes throughout the process should be transparent, so that mistakes made by both humans and machines can be identified and corrected.
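The active-learning screening loop described above can be sketched roughly as follows. This is an illustrative toy, not ASReview's actual implementation (their real models and source code are openly available); the corpus, the word-overlap scoring rule, and the seed labels are all assumptions made for the example.

```python
# Toy sketch of certainty-based active-learning screening over a
# synthetic corpus; illustrative only, NOT ASReview's actual algorithm.

def tokens(text):
    return set(text.lower().split())

# Hypothetical records: the first five are relevant, the rest are not.
docs = (["active learning speeds systematic review screening"] * 5
        + ["unrelated report on astronomy and star formation"] * 20)
truth = [1] * 5 + [0] * 20

labeled = {0: 1, 5: 0}  # reviewer's seed labels: one relevant, one not
pool = [i for i in range(len(docs)) if i not in labeled]

def score(i):
    """Rank a record by word overlap with relevant minus irrelevant labels."""
    rel = set().union(*(tokens(docs[j]) for j, y in labeled.items() if y == 1))
    irr = set().union(*(tokens(docs[j]) for j, y in labeled.items() if y == 0))
    t = tokens(docs[i])
    return len(t & rel) - len(t & irr)

for _ in range(5):              # each round: re-rank the pool, screen the top record
    top = max(pool, key=score)
    labeled[top] = truth[top]   # reviewer supplies the true label
    pool.remove(top)

found = sum(labeled.values())
print(found)                    # relevant records found after screening only 7 of 25
```

Because the model keeps re-ranking the unscreened pool and always presents the most likely relevant record next, the reviewer finds all five relevant records after screening only seven of the twenty-five, which is the source of the large time savings the abstract mentions. Transparency here means exactly this loop being inspectable: the scoring rule and every ranking decision are visible and auditable.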
