Abstract

Background

Randomized controlled trials (RCTs) with rigorous study designs are vital for determining the efficacy of treatments. Despite the high internal validity attributed to RCTs, external validity concerns limit the generalizability of results to the general population. Bias can be introduced, for example, when study participants who self-select into a trial are more motivated to comply with study conditions than are other individuals. These external validity considerations extend to e-mental health (eMH) research, especially when eMH tools are designed for public access and provide minimal or no supervision.

Objective

Clustering techniques were employed to identify engagement profiles of RCT participants and community users of a self-guided eMH program. This exploratory approach inspected actual, not theorized, engagement patterns of RCT participants and community users. Both samples had access to the eMH program over the same time period and received identical usage recommendations on the eMH program website. The aim of this study is to help gauge expectations of similarities and differences in usage behaviors of an eMH tool across evaluation and naturalistic contexts.

Methods

Australian adults signed up to myCompass, a self-guided online treatment program created to reduce mild to moderate symptoms of negative emotions, either as part of an RCT (160/231, 69.6% female) or by accessing the program freely on the internet (5563/8391, 66.30% female) between October 2011 and October 2012. During registration, RCT participants and community users provided basic demographic information. Usage metrics (number of logins, trackings, and learning activities) were recorded by the system.

Results

The samples differed significantly in age at sign-up (P=.003), with community users being on average 3 years older (mean 41.78, SD 13.64) than RCT participants (mean 38.79, SD 10.73). Furthermore, frequency of program use was higher for RCT participants on all usage metrics over the first 49 days after registration (all P values <.001). Two-step cluster analyses revealed 3 user groups in the RCT sample (Nonstarters, 10-Timers, and 30+-Timers) and 2 user groups in the community sample (2-Timers and 20-Timers). The groups appeared comparable in patterns of use but differed in magnitude, with RCT usage groups engaging more frequently than community usage groups. Only the high-usage group among RCT participants approached the myCompass usage recommendations.

Conclusions

Findings suggest that external validity concerns about RCT designs may arise with regard to the predicted magnitude of eMH program use rather than overall usage styles. Following up RCT nonstarters may provide unique insights into why individuals choose not to engage with an eMH program despite being willing, in general, to participate in an eMH evaluation study. Overestimating frequency of engagement with eMH tools may have theoretical implications and may affect economic considerations for plans to disseminate these tools to the general public.
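The two-step cluster analyses above were applied to the recorded usage metrics. As a rough illustration only, the following Python sketch shows a comparable workflow using scikit-learn: a Gaussian mixture model whose number of clusters is chosen by the Bayesian information criterion (BIC), a common stand-in for the automatic cluster-number selection of two-step clustering. The synthetic usage matrix and all variable names are assumptions for demonstration, not the study's data or code.

    # Illustrative sketch only: BIC-guided Gaussian mixture clustering of
    # usage metrics (logins, trackings, learning activities), standing in
    # for the two-step cluster analysis reported in the paper.
    import numpy as np
    from sklearn.mixture import GaussianMixture
    from sklearn.preprocessing import StandardScaler

    rng = np.random.default_rng(0)

    # Hypothetical usage matrix: one row per user, columns are
    # [logins, trackings, learning_activities] over the first 49 days.
    usage = np.vstack([
        rng.poisson(lam=(2, 1, 0), size=(100, 3)),    # low-usage users
        rng.poisson(lam=(12, 20, 3), size=(100, 3)),  # moderate-usage users
        rng.poisson(lam=(35, 60, 10), size=(50, 3)),  # high-usage users
    ])

    X = StandardScaler().fit_transform(usage)

    # Fit mixtures with 1-6 components and keep the one with the lowest
    # BIC, mirroring the automatic selection of the number of clusters.
    models = [GaussianMixture(n_components=k, random_state=0).fit(X)
              for k in range(1, 7)]
    best = min(models, key=lambda m: m.bic(X))
    labels = best.predict(X)

    print(f"Selected {best.n_components} clusters")
    for c in range(best.n_components):
        means = usage[labels == c].mean(axis=0)
        print(f"Cluster {c}: n={np.sum(labels == c)}, "
              f"mean logins={means[0]:.1f}, trackings={means[1]:.1f}, "
              f"learning activities={means[2]:.1f}")

Reporting cluster means on the original count scale, as in the final loop, is what allows groups to be labeled by their typical usage frequency, analogous to names such as 2-Timers or 30+-Timers in the study.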

Highlights

  • Well-designed randomized controlled trials (RCTs) are widely seen as the gold standard for determining treatment efficacy, as random assignment to either a treatment or control group allows for the isolation of the treatment effect from both known and unknown confounding factors [1]

  • Findings suggest that external validity concerns about randomized controlled trial (RCT) designs may arise with regard to the predicted magnitude of e-mental health (eMH) program use rather than overall usage styles

  • Following up RCT nonstarters may help provide unique insights into why individuals choose not to engage with an eMH program despite generally being willing to participate in an eMH evaluation study

Introduction

Well-designed randomized controlled trials (RCTs) are widely seen as the gold standard for determining treatment efficacy, as random assignment to either a treatment or control group allows for the isolation of the treatment effect from both known and unknown confounding factors [1]. Despite this high internal validity, several features of RCT designs raise concerns about external validity. These concerns relate to participant selection, attention, retention, researcher contact, and the specifics and frequency of data collection, all of which can limit the generalizability of findings to the general public. Bias can be introduced, for example, when study participants who self-select into a trial are more motivated to comply with study conditions than are other individuals. Such external validity considerations may be even more justified for RCTs that evaluate e-mental health (eMH) programs, which are designed to deliver effective, scalable mental health care in the community [3,4,5,6], especially when eMH tools are intended for public access and provide minimal or no supervision.
