Abstract

Engagement with digital behavior change interventions (DBCIs) is a potentially important mediator of effectiveness; however, we lack validated measures of engagement. This study describes (a) the development of a self-report scale that captures the purported behavioral and experiential facets of engagement and (b) the evaluation of its validity in a real-world setting. A deductive approach to item generation was taken. The sample consisted of adults in the UK who drink excessively, downloaded the freely available Drink Less app with the intention to reduce alcohol consumption, and completed the scale immediately after their first login. Five types of validity (i.e., construct, criterion, predictive, incremental, divergent) were examined using exploratory factor analysis, correlational analyses, and regression of the number of subsequent logins in the next 14 days onto total scale scores. Cronbach's α was calculated to assess internal reliability. A 10-item scale assessing amount and depth of use, interest, enjoyment, and attention was generated. Of 5,460 eligible users, only 203 (3.7%) completed the scale. Seven items were retained, and the scale was found to be unifactorial and internally reliable (α = 0.77). Divergent and criterion validity were not established. Total scale scores were not significantly associated with the number of subsequent logins (B = 0.02; 95% CI = -0.01 to 0.05; p = .14). Behavioral and experiential indicators of engagement with DBCIs may constitute a single dimension, but low response rates to engagement surveys embedded in DBCIs may make their use impracticable in real-world settings.
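For readers unfamiliar with these analyses, the short Python sketch below illustrates how the two quantitative checks named above (Cronbach's α for internal reliability and the predictive-validity regression of subsequent logins on total scale scores) might be computed. The column names, the synthetic data, and the ordinary least-squares model are illustrative assumptions only, not the authors' analysis code.

```python
# Hypothetical sketch of the reliability and predictive-validity analyses.
import numpy as np
import pandas as pd
import statsmodels.api as sm

def cronbach_alpha(items: pd.DataFrame) -> float:
    """alpha = k/(k-1) * (1 - sum of item variances / variance of total score)."""
    k = items.shape[1]
    item_var_sum = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_var_sum / total_var)

# Synthetic responses standing in for the 203 completers (7 retained items, 1-5 scale).
rng = np.random.default_rng(0)
n = 203
latent = rng.normal(size=n)
items = pd.DataFrame(
    {f"item_{i}": np.clip(np.round(3 + latent + rng.normal(size=n)), 1, 5) for i in range(1, 8)}
)
logins_14d = rng.poisson(lam=2.0, size=n)  # number of logins in the following 14 days

print(f"Cronbach's alpha = {cronbach_alpha(items):.2f}")

# Predictive validity: regress the subsequent login count on the total scale score.
# The abstract does not state the model family; OLS is used here purely as one option.
X = sm.add_constant(items.sum(axis=1).rename("total_score"))
fit = sm.OLS(logins_14d, X).fit()
print(fit.params, fit.conf_int(), sep="\n")
```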

Highlights

  • Some degree of “engagement” with digital behavior change interventions (DBCIs) is logically necessary for them to be effective [1]

  • Implications for practice: When deciding which measures of engagement to include in evaluations of the effectiveness of DBCIs, good psychometric properties must be carefully weighed against acceptability and measurement burden

  • Implications for policy: Resources should be directed toward further development and evaluation of engagement measures to arrive at validated instruments that facilitate easy comparison across DBCIs



INTRODUCTION

Some degree of “engagement” with digital behavior change interventions (DBCIs) is logically necessary for them to be effective [1]. Despite various attempts to characterize the function relating engagement to successful behavior change [1,2,9,10], data cannot be aggregated efficiently because studies use different definitions and measures of engagement (see [2] for a review of definitions). Although many measures of engagement are currently in use (see [1,2] for overviews), including self-report scales and objective usage data, an instrument that captures both the behavioral and experiential facets of engagement is lacking. The present study aimed to develop and validate a new self-report scale that captures both of these facets.

