Abstract

Personalized policy represents a paradigm shift from one decision rule for all users to an individualized decision rule for each user. Developing personalized policies in mobile health applications poses challenges. First, owing to lack of adherence, data from each user are limited. Second, unmeasured contextual factors can potentially impact decision making. Aiming to optimize immediate rewards, we propose a generalized linear mixed modeling framework in which population features and individual features are modeled as fixed and random effects, respectively, and are synthesized to form the personalized policy. A group-lasso-type penalty is imposed to avoid overfitting of individual deviations from the population model. We examine the conditions under which the proposed method works in the presence of time-varying endogenous covariates, and provide conditional optimality and marginal consistency results for the expected immediate outcome under the estimated policies. We apply our method to develop personalized push ("prompt") schedules for 294 app users, with the goal of maximizing the prompt response rate given past app usage and other contextual factors. In a simulation study, the proposed method compares favorably to existing estimation methods, including the R function "glmer". Supplementary materials for this article are available online.
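To make the modeling idea concrete, the following is a minimal illustrative sketch (not the paper's actual estimator) of a logistic model with population fixed effects shared across users, per-user random-effect deviations, and a group-lasso penalty on each user's deviation vector so that users with little data shrink toward the population model. All function and variable names here are hypothetical, and the fit uses a simple proximal-gradient loop for illustration.

```python
import numpy as np

def fit_personalized_policy(X, y, user, n_users, lam=0.5, lr=0.1, iters=500):
    """Illustrative sketch: logistic model with shared fixed effects `beta`
    and per-user deviations B[i], penalized by lam * sum_i ||B[i]||_2
    (a group lasso over users). Fit by proximal gradient descent."""
    n, p = X.shape
    beta = np.zeros(p)                       # population (fixed) effects
    B = np.zeros((n_users, p))               # per-user (random-effect) deviations
    for _ in range(iters):
        # linear predictor combines population and individual effects
        eta = X @ beta + np.sum(X * B[user], axis=1)
        resid = 1.0 / (1.0 + np.exp(-eta)) - y    # logistic-loss gradient factor
        beta -= lr * (X.T @ resid) / n            # gradient step on fixed effects
        for i in range(n_users):
            m = user == i
            if not m.any():
                continue
            B[i] -= lr * (X[m].T @ resid[m]) / n  # gradient step on deviation
            norm = np.linalg.norm(B[i])
            if norm > 0:                          # group soft-thresholding:
                B[i] *= max(0.0, 1.0 - lr * lam / norm)  # shrink or zero out B[i]
    return beta, B
```

With a large penalty `lam`, every user's deviation is driven to zero and the fit collapses to a single population policy; as `lam` decreases, users with enough data acquire nonzero individual deviations, which is the shrinkage behavior the group-lasso penalty is meant to provide.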
