Abstract

Artificial intelligence (AI) refers to a new generation of technologies capable of interacting with their environment and designed to simulate human intelligence. The success of integrating AI into organizations depends critically on workers' trust in the technology. Trust is a central component of the interaction between people and AI, as miscalibrated levels of trust may lead to misuse, abuse, or disuse of the technology. The European Commission's High-Level Expert Group on AI (HLEG) has adopted the position that we should establish a relationship of trust with AI and cultivate trustworthy AI. This article investigates the links between trust in AI, concerns related to AI use, and the ethics of such use. We use data collected in 2019 from more than 30,000 individuals across the EU28, covering living conditions, trust, and AI uses and concerns. We estimate an ordered logit model, with an ordered measure of trust in AI as the endogenous variable, to highlight the factors associated with higher levels of trust in AI in Europe. The results show that many concerns related to AI use are linked to trust in AI, and that the ability to try out AI applications also shapes initial trust. To enhance trust, practitioners can seek to maximize the technological features of AI systems, and representing the AI as a humanoid or as a loyal pet (e.g., a dog) can facilitate initial trust formation. Moreover, the findings reveal an unequal degree of trust in AI across countries.
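
As a rough illustration of the methodology described above, the following is a minimal sketch of how an ordered logit of trust in AI on survey covariates could be fitted with statsmodels' OrderedModel. It is not the authors' code: the file name and column names (trust_ai, concern_privacy, can_try_ai, country) are hypothetical, since the article does not publish its exact variable list.

```python
import pandas as pd
from statsmodels.miscmodels.ordinal_model import OrderedModel

# Hypothetical survey extract; the actual EU28 dataset is not distributed here.
df = pd.read_csv("eu28_survey_2019.csv")

# Endogenous variable: an ordered measure of trust in AI
# (e.g., 1 = no trust ... 4 = full trust), as in the article.
endog = df["trust_ai"].astype(pd.CategoricalDtype(ordered=True))

# Exogenous covariates: an AI-use concern, the ability to try out AI
# applications, and country dummies (reference category dropped).
exog = pd.concat(
    [
        df[["concern_privacy", "can_try_ai"]],
        pd.get_dummies(df["country"], prefix="cty", drop_first=True),
    ],
    axis=1,
).astype(float)

# Ordered logit: P(y <= j | x) = Logistic(kappa_j - x'beta)
model = OrderedModel(endog, exog, distr="logit")
result = model.fit(method="bfgs", disp=False)
print(result.summary())
```

Positive, significant coefficients in such a specification would indicate covariates associated with higher trust levels, while the country dummies capture the cross-country differences the article reports.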
