Abstract

Voice assistants (VAs) are devices that use AI, machine learning, and natural language processing (NLP) to enable users to perform diverse tasks verbally. VAs are also "always on": they continuously analyze sounds in their environment and begin interacting with users once they recognize a wake-up command such as "Hey Siri" or "Okay Google", which means VAs must listen to users at all times. This raises a privacy concern in the form of perceived surveillance. This study assesses how perceived surveillance affects the continuance usage intention of VAs in Indonesia, with personal information disclosure added as a mediator. The surveillance effect model was used to measure perceived surveillance, and the model was estimated with PLS-SEM on online survey data (N = 222) distributed over social media. The results reveal that perceived surveillance negatively affects the continuance usage intention of VAs and that this effect is partially mediated by personal information disclosure. The results also confirm that trust, perceived risk, and prior negative experiences are predictors of perceived surveillance. VA companies should therefore be mindful of how the level of perceived surveillance their customers feel affects those customers' continuance usage intention.
