Abstract

Advancements in search engines that utilize machine learning increase the likelihood that users will perceive these systems as worthy of trust. The nature and implications of trust in the context of algorithmic systems that utilize machine learning are examined, and the resulting conception of trust is modelled. While current artificial intelligence does not meet the requirements of moral autonomy necessary to be considered trustworthy, people may still engage in misplaced trust based on the perception of moral autonomy. Users who place their trust in algorithmic systems limit their critical engagement with, and assessment of, the information interaction. A preliminary high-level model of trust’s role in information interactions, adapting Ingwersen and Järvelin’s Integrative Model for Interactive Information Seeking and Retrieval, is proposed using the Google search engine as an example. We need to recognize that it is possible for users to react to information systems in a social manner that may lead to the formation of trust attitudes. As information professionals, we want to develop interventions that encourage users to stay critically engaged in their interactions with information systems, even when they perceive those systems to be autonomous.
