Abstract

Technologies that use artificial intelligence to recognize people's emotional states are increasingly being developed under the name of emotion recognition technologies. Emotion recognition technologies claim to identify people's emotional states from data such as facial expressions. This is despite research showing that emotion recognition technologies are founded on bad science and that it is not possible to correctly identify people's emotions in this way. The use of emotion recognition technologies is widespread, and they can be harmful when used in the workplace, especially for autistic workers. Although previous research has shown that the origins of emotion recognition technologies relied on autistic people, there has been little research on the impact of these technologies on autistic people when they are used in the workplace. Through a review of recent academic studies, this article examines the development and implementation of emotion recognition technologies to show how autistic people in particular may be disadvantaged or harmed by their development and use. The article closes with a call for more research, involving diverse participants, on autistic people's perceptions of these technologies and their impact.
