Abstract

Skills are essential components of Smart Personal Assistants (SPAs). The number of skills has grown rapidly, driven by a fast-changing environment with no clear business model. Skills can access personal information, and this may pose a risk to users. However, there is little information about how this ecosystem works, let alone tools that can facilitate its study. In this article, we present the largest systematic measurement of the Amazon Alexa skill ecosystem to date. We study developers' practices in this ecosystem, including how they collect and justify the need for sensitive information, by designing a methodology to identify over-privileged skills with broken privacy policies. We collect 199,295 Alexa skills and uncover that around 43% of the skills (and 50% of the developers) that request data permissions follow bad privacy practices, including (partially) broken data-permission traceability. To perform this kind of analysis at scale, we present SkillVet, which leverages machine learning and natural language processing techniques and generates high-accuracy prediction sets. We report several concerning practices, including how developers can bypass Alexa's permission system through account linking and conversational skills, and offer recommendations on how to improve transparency, privacy, and security. As a result of our responsible disclosure, 13% of the reported issues no longer posed a threat at the time of submission.
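To make the notion of data-permission traceability concrete, the following is a minimal, hypothetical sketch of such a check in Python: a skill's traceability is complete, partially broken, or broken depending on how many of its requested permissions its privacy policy actually discusses. The permission scopes and keyword lists are illustrative assumptions, and the keyword matching stands in for the machine learning and NLP pipeline the paper actually describes.

# Hypothetical sketch of a data-permission traceability check, in the spirit
# of the analysis described in the abstract. Scope names and keyword lists
# are illustrative assumptions, not the paper's actual SkillVet implementation.

PERMISSION_KEYWORDS = {
    "alexa::devices:all:address:full:read": ["postal address", "home address", "location"],
    "alexa::profile:email:read": ["email", "e-mail"],
    "alexa::profile:mobile_number:read": ["phone", "mobile number"],
}

def traceability(requested_permissions, policy_text):
    """Classify a skill's data-permission traceability.

    Returns 'complete' if the privacy policy mentions every requested
    permission, 'partial' if it mentions only some, and 'broken' if it
    mentions none (or if the policy is missing entirely).
    """
    if not policy_text:
        return "broken"
    text = policy_text.lower()
    covered = [
        p for p in requested_permissions
        if any(kw in text for kw in PERMISSION_KEYWORDS.get(p, []))
    ]
    if len(covered) == len(requested_permissions):
        return "complete"
    return "partial" if covered else "broken"

# Example: a skill requesting both the device address and the user's email,
# whose policy only discusses email, has partially broken traceability.
print(traceability(
    ["alexa::devices:all:address:full:read", "alexa::profile:email:read"],
    "We collect your email to send receipts.",
))  # -> 'partial'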
