Abstract
This study examines the roles of various layers of trust, as well as privacy and security concerns, in shaping the acceptance of AI-powered facial recognition technology (FRT) in three surveillance scenarios: public spaces, hospitals, and schools. Based on survey data from 575 U.S. participants, we found that the context in which FRT is deployed shapes people's perceptions and acceptance of the technology. People perceived greater safety gains in schools and greater privacy risks in public spaces. Trust in officials, familiarity with FRT, and perceived security benefits positively predicted acceptance, while distrust and perceived privacy risks negatively predicted acceptance. These findings offer insights for FRT stakeholders, policymakers, and organizations seeking to implement AI-powered surveillance, emphasizing the need to address public trust and privacy concerns.