Abstract

Building on big data from Reddit, we generated two computational text models: (i) predicting the personality of users from the text they have written and (ii) predicting the personality of users from the text they have consumed. The second model has no precedent in the literature. We recruited active Reddit users () from fiction-writing communities. The participants completed a Big Five personality questionnaire and consented to have their Reddit activity scraped and used to create a machine learning model. We trained a natural language processing model [Bidirectional Encoder Representations from Transformers (BERT)] to predict personality from produced text (average performance: ). We then applied this model to a new set of Reddit users (), predicted their personality from their produced text, and trained a second BERT model to predict these predicted-personality scores from consumed text (average performance: ). In doing so, we provide a first glimpse into the linguistic markers of personality-congruent consumed content.

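Both stages described above amount to fine-tuning BERT as a multi-output regressor over the five trait scores, first on produced text with questionnaire scores as targets and then on consumed text with the first model's predictions as targets. The sketch below illustrates this general setup with the Hugging Face transformers library; it is a minimal illustration under assumed details (checkpoint name, example text, trait values, single training step), not the authors' actual pipeline or hyperparameters.

```python
# Minimal sketch (not the authors' code): fine-tuning BERT to regress
# Big Five trait scores from text. All concrete values below are illustrative.
import torch
from transformers import BertTokenizerFast, BertForSequenceClassification

tokenizer = BertTokenizerFast.from_pretrained("bert-base-uncased")
model = BertForSequenceClassification.from_pretrained(
    "bert-base-uncased",
    num_labels=5,               # one output per Big Five trait
    problem_type="regression",  # use MSE loss instead of cross-entropy
)

# A user's text and hypothetical trait targets (O, C, E, A, N), e.g. scaled 0-1.
# In stage one these would be questionnaire scores for produced text;
# in stage two, the first model's predictions paired with consumed text.
texts = ["I spent the weekend drafting the next chapter of my story."]
targets = torch.tensor([[0.62, 0.48, 0.71, 0.55, 0.33]])

inputs = tokenizer(texts, truncation=True, padding=True, return_tensors="pt")
outputs = model(**inputs, labels=targets)
outputs.loss.backward()  # one gradient step; a full run would loop over batches
```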