Abstract

Artificial Emotional Intelligence research has focused on emotions in a limited “black box” sense, concerned only with emotions as inputs and outputs for the system while disregarding the processes and structures that constitute the emotion itself. We’re teaching machines to act as if they can feel emotions without the capacity to actually feel them. Serious moral and social problems will arise if we stick with the black box approach. As AIs become more integrated into our lives, humans will require more than mere emulation of emotion; we’ll need them to have ‘the real thing.’ Moral psychology suggests that emotions are necessary for moral reasoning and moral behavior. Socially, the rise of ‘affective computing’ foreshadows the intimate ways humans will come to expect emotional reciprocity from their machines. Three objections are considered and answered: that giving machines genuine emotions is (1) not possible, (2) not necessary, and (3) too dangerous.
