Abstract

Background: Randomized controlled trials (RCTs) have long been considered the primary research study design capable of eliciting causal relationships between health interventions and consequent outcomes. However, with a prolonged duration from recruitment to publication, high-cost trial implementation, and a rigid trial protocol, RCTs are perceived as an impractical evaluation methodology for most mHealth apps.

Objective: Given the recent development of alternative evaluation methodologies and tools to automate mHealth research, we sought to determine the breadth of these methods and the extent to which they were being used in clinical trials.

Methods: We conducted a review of the ClinicalTrials.gov registry to identify and examine current clinical trials involving mHealth apps, retrieving relevant trials registered between November 2014 and November 2015.

Results: Of the 137 trials identified, 71 met the inclusion criteria. The majority used a randomized controlled trial design (80%, 57/71). Study designs included 36 two-group pretest-posttest control group comparisons (51%, 36/71), 16 posttest-only control group comparisons (23%, 16/71), 7 one-group pretest-posttest designs (10%, 7/71), 2 one-shot case study designs (3%, 2/71), and 2 static-group comparisons (3%, 2/71). A total of 17 trials included a qualitative component in their methodology (24%, 17/71). Complete trial data collection required 20 months on average (mean 21, SD 12). For trials with a total duration of 2 years or more (31%, 22/71), the average time from recruitment to complete data collection (mean 35 months, SD 10) was 2 years longer than the average time required to collect primary data (mean 11, SD 8). Trials had a moderate sample size of 112 participants. Two trials were conducted online (3%, 2/71) and 7 trials collected data continuously (10%, 7/68). Onsite study implementation was heavily favored (97%, 69/71). Trials with four data collection points had a longer study duration than trials with two data collection points (F4,56=3.2, P=.021, η2=0.18). Single-blinded trials had a longer data collection period than open trials (F2,58=3.8, P=.028, η2=0.12). Academic sponsorship was the most common form of trial funding (73%, 52/71). Trials with academic sponsorship had a longer study duration than industry-sponsored trials (F2,61=3.7, P=.030, η2=0.11). Combined, data collection frequency, study masking, sample size, and study sponsorship accounted for 32.6% of the variance in study duration (F4,55=6.6, P<.01, adjusted r2=.33). Only 7 trials had been completed at the time this retrospective review was conducted (10%, 7/71).

Conclusions: mHealth evaluation methodology has not deviated from common methods, despite the need for more relevant and timely evaluations. Clinical evaluation must keep pace with the level of innovation in mHealth if it is to have a meaningful impact in informing payers, providers, policy makers, and patients.
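The η² effect sizes reported alongside the F statistics above follow the standard one-way ANOVA relation η² = (df1·F)/(df1·F + df2), since F = (SSbetween/df1)/(SSwithin/df2) and η² = SSbetween/(SSbetween + SSwithin). A minimal sketch (the function name is illustrative, not from the study) shows the reported values are internally consistent to rounding:

```python
def eta_squared(f: float, df1: int, df2: int) -> float:
    """Effect size eta^2 implied by a one-way ANOVA F(df1, df2) statistic.

    Derived from F = (SS_between/df1) / (SS_within/df2) and
    eta^2 = SS_between / (SS_between + SS_within).
    """
    return (df1 * f) / (df1 * f + df2)

# Checking the three ANOVA results reported in the abstract:
print(eta_squared(3.2, 4, 56))  # ~0.186, reported as 0.18
print(eta_squared(3.8, 2, 58))  # ~0.116, reported as 0.12
print(eta_squared(3.7, 2, 61))  # ~0.108, reported as 0.11
```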
