Abstract

We propose a framework for integrating modern natural language processing (NLP) models to assist researchers in developing valid psychological scales. Transformer‐based deep neural networks offer state‐of‐the‐art performance on a wide range of natural language tasks. This project adapts the transformer model GPT‐2 to learn the structure of personality items and generate the largest openly available pool of personality items: one million new items. We then use this artificial intelligence‐based item pool (AI‐IP) to provide a subset of potential scale items for measuring a desired construct. To better recommend construct‐related items, we train a BERT‐based paired classification model to predict the observed correlation between personality items from their text alone. We also demonstrate how zero‐shot models can help balance desired content domains within a scale. In combination with the AI‐IP, these models narrow the large item pool to the items most correlated with a set of initial items. We demonstrate the ability of this multimodel framework to develop longer, cohesive scales from a small set of construct‐relevant items, and we found reliability, validity, and fit for AI‐assisted scales equivalent to those of scales developed and optimized by traditional methods. By leveraging neural networks' ability to generate text relevant to a given topic and to infer semantic similarity, this project demonstrates how to support the creative and open‐ended elements of the scale development process, increasing the likelihood that an initial scale is valid and minimizing the need to modify and revalidate it.
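The item-narrowing step described above can be illustrated with a minimal sketch: score every candidate item in a pool by its mean similarity to a small set of seed items and keep the highest-scoring candidates. This toy version uses bag-of-words cosine similarity purely as a stand-in for the trained BERT correlation model, and all item text and function names below are invented for illustration:

```python
import math
import re
from collections import Counter

def bow_vector(text):
    """Lowercased bag-of-words counts for one item (punctuation stripped)."""
    return Counter(re.findall(r"[a-z']+", text.lower()))

def cosine(a, b):
    """Cosine similarity between two bag-of-words Counters."""
    dot = sum(a[w] * b[w] for w in a)
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

def rank_candidates(seed_items, pool, top_k=3):
    """Score each pool item by its mean similarity to the seed items
    and return the top_k highest-scoring candidates."""
    seed_vecs = [bow_vector(s) for s in seed_items]
    scored = []
    for item in pool:
        v = bow_vector(item)
        score = sum(cosine(v, s) for s in seed_vecs) / len(seed_vecs)
        scored.append((score, item))
    scored.sort(reverse=True)
    return [item for _, item in scored[:top_k]]

# Invented seed items for a hypothetical extraversion-like construct.
seeds = ["I enjoy meeting new people.", "I start conversations easily."]
pool = [
    "I love talking with strangers at parties.",
    "I keep my workspace tidy.",
    "I feel comfortable starting conversations.",
    "I worry about small mistakes.",
]
print(rank_candidates(seeds, pool, top_k=2))
```

In the actual framework, the similarity function would be replaced by the BERT model's predicted inter-item correlation, and the pool would be the million-item AI‐IP rather than a handful of examples.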

