Abstract

A plethora of research explores “problematic use” of technologies, but how “problematic” is conceptualised and operationalised remains an ongoing issue. There is a lack of consistency in how cut-offs are used to distinguish “problematic” users and how this is then handled in subsequent analyses. We compared scoring strategies common to “problematic” use research and examined how each affected prevalence rates and associations with psychosocial and behavioural variables amongst high school students. Adolescents (N = 446) completed measures of “problematic” use of smartphones, online gaming and social media, as well as self-esteem and problematic school behaviour. For each “problematic” technology use questionnaire, we divided the sample into high and low “problematic” technology use groups, using both a polythetic and a monothetic scoring method. Prevalence rates varied substantially depending on the scoring method used, despite these techniques being treated as largely interchangeable in the literature. Furthermore, logistic regressions indicated that, overall, polythetic rather than monothetic methods elicited more observable differences between high and low “problematic” user groups. This suggests that consistency and consensus on scoring methods are paramount to ensure that researchers adhere to standardised parameters.
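The polythetic/monothetic distinction can be made concrete with a minimal sketch. The item cut-off, the number of items required, and the example responses below are all hypothetical illustrations, not the parameters of the questionnaires used in this study: a polythetic rule flags a respondent who meets the cut-off on a sufficient subset of items, whereas a monothetic rule requires the cut-off to be met on every item.

```python
def polythetic(item_scores, item_cutoff=3, n_required=5):
    """Flag as "problematic" if at least n_required items meet the item cut-off."""
    return sum(score >= item_cutoff for score in item_scores) >= n_required

def monothetic(item_scores, item_cutoff=3):
    """Flag as "problematic" only if every item meets the item cut-off."""
    return all(score >= item_cutoff for score in item_scores)

# Hypothetical 9-item scale scored 1-5; 7 of 9 items meet the cut-off of 3.
responses = [4, 3, 3, 2, 4, 3, 1, 3, 3]
print(polythetic(responses))  # True: 7 qualifying items >= the 5 required
print(monothetic(responses))  # False: two items fall below the cut-off
```

Because the monothetic rule is strictly more demanding, it will always classify the same or fewer respondents as “problematic” than a polythetic rule applied with the same item cut-off, which is one route by which the choice of rule shifts prevalence estimates.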
