Abstract
Text analysis is a form of psychological assessment that involves converting qualitative information (text) into quantitative data. We tested whether automated text analysis using Generative Pre-trained Transformers (GPTs) can match the "gold standard" of manual text analysis, even when assessing a highly nuanced construct like spirituality. In Study 1, N = 2199 US undergraduates wrote about their goals (N = 6597 texts) and completed self-reports of spirituality and theoretically related constructs (religiousness and mental health). In Study 2, N = 357 community adults wrote short essays (N = 714 texts) and completed trait self-reports, 5 weeks of daily diaries, and behavioral measures of spirituality. Trained research assistants and GPTs then coded the texts for spirituality. The GPTs performed just as well as human raters. Human- and GPT-generated scores were remarkably consistent and showed equivalent associations with other measures of spirituality and theoretically related constructs. GPTs can match the gold standard set by human raters, even in sophisticated forms of text analysis, but require a fraction of the time and labor.
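To make the workflow concrete, below is a minimal sketch of how GPT-based coding of texts and human–GPT agreement might be computed. It assumes the OpenAI Python client (openai>=1.0); the prompt wording, the 1–5 rating scale, the model name, and the helper names `gpt_code_spirituality` and `human_gpt_agreement` are illustrative assumptions, not the study's actual materials.

```python
import numpy as np
from openai import OpenAI  # assumes openai>=1.0; reads OPENAI_API_KEY from the environment

client = OpenAI()

# Hypothetical coding instructions; the study's actual prompt is not reproduced here.
CODING_PROMPT = (
    "You are a trained coder. Rate how much the following text expresses "
    "spirituality on a scale from 1 (not at all) to 5 (very much). "
    "Respond with a single number.\n\nText: {text}"
)

def gpt_code_spirituality(text: str, model: str = "gpt-4") -> float:
    """Ask a GPT model to rate one text for spirituality (illustrative sketch)."""
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": CODING_PROMPT.format(text=text)}],
        temperature=0,  # low temperature for more consistent ratings
    )
    return float(response.choices[0].message.content.strip())

def human_gpt_agreement(human_scores, gpt_scores) -> float:
    """Pearson correlation between human-coded and GPT-coded scores for the same texts."""
    return float(np.corrcoef(human_scores, gpt_scores)[0, 1])
```

In practice, each text would be scored by both trained research assistants and the GPT, and agreement would be summarized with a consistency statistic such as the correlation above (or an intraclass correlation) before comparing the two score sets' associations with the self-report, diary, and behavioral measures.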