Abstract

We show that using a recent breakthrough in artificial intelligence, transformers, psychological assessments from text responses can approach theoretical upper limits in accuracy, converging with standard psychological rating scales. Text responses use people's primary form of communication, natural language, and have been suggested as a more ecologically valid response format than the closed-ended rating scales that dominate social science. However, previous language-analysis techniques left a gap between how accurately they converged with standard rating scales and how well rating scales converge with themselves, a theoretical upper limit in accuracy. Most recently, AI-based language analysis has been transformed as nearly all of its applications, from web search to personalized assistants (e.g., Alexa and Siri), have shown unprecedented improvement by using transformers. We evaluate transformers for estimating psychological well-being from questionnaire text and descriptive-word responses, and find accuracies converging with rating scales that approach the theoretical upper limits (Pearson r = 0.85, p < 0.001, N = 608; in line with most metrics of rating-scale reliability). These findings suggest an avenue for modernizing the ubiquitous questionnaire and ultimately opening doors to a greater understanding of the human condition.

Highlights

  • We show that using a recent breakthrough in artificial intelligence, transformers, psychological assessments from text responses can approach theoretical upper limits in accuracy, converging with standard psychological rating scales

  • In personality and social psychology today, research is dominated by asking participants to express themselves through numeric rating scales, where complex states of mind are reduced to a set of predefined answers

  • While rating scales have contributed to important findings in social and personality psychology and other fields, they come with drawbacks


Introduction

As an alternative to rating scales, questionnaire language-based assessments measure mental states with open-ended responses that are quantified and analyzed with techniques from AI. Language-based assessments' correlations with rating scales have fallen short of the degree of accuracy with which rating scales themselves can be trusted (i.e., taking into account their reliability and measurement error), which can be seen as a theoretical upper limit on possible alignment with a rating scale.
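The notion of a theoretical upper limit can be made concrete with Spearman's correction for attenuation: the observed correlation between two measures cannot exceed the square root of the product of their reliabilities. The sketch below illustrates this; the reliability value of 0.72 is a hypothetical number chosen for illustration, not a figure reported in the study.

```python
import math

def attenuation_ceiling(reliability_x: float, reliability_y: float = 1.0) -> float:
    """Upper bound on the observed Pearson correlation between two measures,
    given their reliabilities (Spearman's attenuation formula)."""
    return math.sqrt(reliability_x * reliability_y)

# Hypothetical rating-scale reliability of 0.72: even a perfectly reliable
# language-based estimate could then correlate with the scale at most at
# sqrt(0.72) ~ 0.85.
ceiling = attenuation_ceiling(0.72)
print(round(ceiling, 2))  # 0.85
```

Under this reading, an observed convergence near such a ceiling would indicate that a language-based assessment agrees with the rating scale about as well as the scale's own measurement error permits.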

