Abstract

This study investigates the public's initial trust in an artificial intelligence (AI) decision aid used in the delivery of public services. Amid societal anxiety surrounding AI, the study posited that how the use of AI is communicated to the public shapes initial trust in it. More specifically, the study hypothesized that an assurance that "humans are still in the decision loop" (HDL) makes a difference to the public's initial trust (H1), and that this effect might also depend on the stated purposes for using AI (H2). This article reports the results of an online experiment testing these hypotheses in the context of Japan's long-term nursing care sector, based on the responses of care users and their families (N = 1542). The study did not find strong evidence to support H2. However, it found some support for H1: the proportion of respondents who trusted a care plan prepared with AI assistance more than a care plan prepared without AI was 8.95 percentage points higher with the HDL assurance than without it. This highlights the importance of the HDL assurance and reveals respondents' reservations about a complete AI takeover in care planning.
