Abstract
Background: With the growing interest in mobile health (mHealth), behavioral medicine researchers are increasingly conducting intervention studies that use mobile technology (eg, to support healthy behavior change). Such studies’ scientific premises are often sound, yet there is a dearth of implementational data on which to base mHealth research methodologies. Notably, mHealth approaches must be designed to be acceptable to research participants to support meaningful engagement, but little empirical data about design factors influencing acceptability in such studies exist.
Objective: This study aims to evaluate the impact of two common design factors in mHealth intervention research on reported participant acceptability: requiring multiple devices (eg, a study smartphone and wrist sensor) relative to requiring a single device, and providing individually tailored feedback as opposed to generic content.
Methods: A diverse US adult convenience sample (female: 104/255, 40.8%; White: 208/255, 81.6%; aged 18-74 years) was recruited to complete a web-based experiment. A 2×2 factorial design (number of devices × nature of feedback) was used. A learning module explaining the necessary concepts (eg, behavior change interventions, acceptability, and tailored content) was presented, followed by four vignettes (one for each factorial cell) shown to participants in random order. Each vignette described a hypothetical mHealth intervention study featuring a different combination of the two design factors (requiring a single device vs multiple devices and providing tailored vs generic content). Participants rated acceptability dimensions (interest, benefit, enjoyment, utility, confidence, difficulty, and overall likelihood of participating) for each study presented.
Results: Reported interest, benefit, enjoyment, confidence in completing study requirements, and perceived utility were each significantly higher for studies featuring tailored (vs generic) content, as was the overall estimated likelihood of participation. Ratings of interest, benefit, and perceived utility were also significantly higher for studies requiring multiple devices (vs a single device); however, multiple-device studies received significantly lower ratings of confidence in completing study requirements, were perceived as more difficult to participate in, and were associated with a lower estimated likelihood of participation. The two factors showed no evidence of statistical interaction for any of the outcomes tested.
Conclusions: The results suggest that potential research participants are sensitive to mHealth design factors. These mHealth intervention design factors may be important for initial perceptions of acceptability (in research or clinical settings). This, in turn, may be associated with participant (eg, self-) selection processes, differential compliance with study or treatment processes, or retention over time.
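To make the analytic setup concrete, the sketch below shows one way a 2×2 factorial model with an interaction term could be specified in Python using statsmodels. The abstract does not report the study's actual analysis; the data and effect sizes here are simulated for illustration only, and the repeated-measures structure (each participant rating all four vignettes) is ignored for simplicity (a mixed-effects model would account for it).

# Hypothetical sketch: a 2x2 factorial (devices x feedback) model with an
# interaction term, fit to simulated acceptability ratings. Column names,
# effect sizes, and data are illustrative and do not come from the study.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 255  # sample size reported in the abstract

# Simulate one rating per factorial cell per participant (4 cells total).
devices = np.tile(["single", "multiple"], n * 2)
feedback = np.repeat(np.tile(["generic", "tailored"], n), 2)
rating = (
    3.0
    + 0.4 * (feedback == "tailored")   # assumed tailoring effect
    - 0.3 * (devices == "multiple")    # assumed device-burden effect
    + rng.normal(0, 1, size=n * 4)
)
df = pd.DataFrame({"devices": devices, "feedback": feedback, "rating": rating})

# Ordinary least squares with both main effects and their interaction; a
# non-significant devices:feedback term corresponds to "no evidence of
# statistical interaction" for that outcome.
model = smf.ols("rating ~ C(devices) * C(feedback)", data=df).fit()
print(model.summary())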
Highlights
Background: Most health behavior change programs have historically required participants to attend in-person appointments or sessions, with a trained clinician or facilitator guiding intervention delivery
This study showed that differences in intervention design factors, namely the number of devices required and the tailoring of feedback, affect various dimensions of participant acceptability for engaging in behavior change programming via mobile health (mHealth)
Some limited previous work examining acceptability for these design factors was conducted a posteriori; understanding a priori perceptions is advantageous for designing mobile interventions that support up-front buy-in and acceptability among individuals considering participation in mHealth research
Summary
Background: Most health behavior change programs have historically required participants to attend in-person appointments or sessions, with a trained clinician or facilitator guiding intervention delivery. There is still very little research formally examining evidence-based methods for designing mHealth to support uptake and adherence among end users (eg, participants enrolled in a technology-based behavior change program) or what study design features may inhibit engagement early on (eg, at initial study enrollment). This issue is vital in mHealth research, given the large number of design factors and considerations in play. Initial perceptions of acceptability, in turn, may be associated with participant (eg, self-) selection processes, differential compliance with study or treatment processes, or retention over time.