Abstract

Purpose: Uncertainty is an under-appreciated issue in the automatic assessment of human emotion by machines. The purpose of this paper is to highlight existing approaches to the measurement of such uncertainty and to identify further research needs.

Design/methodology/approach: The discussion is based on a literature review.

Findings: Technical solutions for measuring uncertainty in automatic emotion recognition (AER) exist, but they need to be extended to cover a range of so far underrepresented sources of uncertainty. These then need to be integrated into systems available to general users.

Research limitations/implications: Not all sources of uncertainty in AER, including emotion representation and annotation, can be touched upon in this communication.

Practical implications: AER systems should be enhanced to provide more meaningful and complete information on the uncertainty underlying their estimates. Limitations of their applicability should be communicated to users.

Social implications: Users of AER technology will become aware of its limitations, potentially leading to fairer usage in crucial application contexts.

Originality/value: There is no previous discussion that includes the technical viewpoint on extended uncertainty measurement in AER.


