Abstract

Suboptimal health-related behaviors and habits, and the chronic diseases that result from them, are responsible for the majority of deaths globally. Studies show that providing personalized support to patients yields improved results by preventing these problems or treating them in a timely manner. Digital just-in-time adaptive interventions are mobile phone-based notifications used to support people wherever and whenever necessary in coping with their health problems. In this research, we propose a reinforcement learning-based mechanism to personalize interventions in terms of timing, frequency, and preferred type(s). We simultaneously employ two reinforcement learning models, namely intervention-selection and opportune-moment-identification, which capture and exploit changes in people's long-term and momentary contexts, respectively. While the intervention-selection model adapts intervention delivery with respect to type and frequency, the opportune-moment-identification model seeks the most opportune moments to deliver interventions throughout the day. We propose two accelerator techniques over the standard reinforcement learning algorithms to boost learning performance. First, we propose a customized version of eligibility traces for rewarding past actions throughout an agent's trajectory. Second, we utilize transfer learning to reuse knowledge across multiple learning environments. We validate the proposed approach in a simulated experiment with four personas differing in their daily activities, their preferences for specific intervention types, and their attitudes towards the targeted behavior. Our experiments show that the proposed approach yields better results than the standard reinforcement learning algorithms and successfully captures the simulated variations associated with the personas.
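To make the first accelerator concrete, the sketch below shows standard tabular Q-learning with accumulating eligibility traces, the textbook mechanism that credits past actions along a trajectory when a delayed reward arrives. This is an illustrative toy only, not the authors' customized variant or simulation environment: the chain environment, state/action sizes, and all hyperparameters are hypothetical.

```python
import numpy as np

# Illustrative sketch (not the paper's implementation): Q-learning with
# accumulating eligibility traces on a toy 4-state chain. Reaching the
# last state yields reward 1; traces let that reward credit the earlier
# actions that led there, not just the final one.
N_STATES, N_ACTIONS = 4, 2
ALPHA, GAMMA, LAMBDA, EPSILON = 0.1, 0.95, 0.8, 0.2

rng = np.random.default_rng(0)
Q = np.zeros((N_STATES, N_ACTIONS))

def step(state, action):
    """Toy environment: action 1 moves forward, action 0 moves back."""
    nxt = min(state + 1, N_STATES - 1) if action == 1 else max(state - 1, 0)
    reward = 1.0 if nxt == N_STATES - 1 else 0.0
    return nxt, reward, nxt == N_STATES - 1

for episode in range(200):
    E = np.zeros_like(Q)   # eligibility traces, reset each episode
    s, done = 0, False
    while not done:
        # epsilon-greedy action selection
        a = int(rng.integers(N_ACTIONS)) if rng.random() < EPSILON else int(Q[s].argmax())
        s2, r, done = step(s, a)
        td_error = r + GAMMA * Q[s2].max() * (not done) - Q[s, a]
        E[s, a] += 1.0             # mark the visited state-action pair
        Q += ALPHA * td_error * E  # update ALL eligible pairs, not only (s, a)
        E *= GAMMA * LAMBDA        # decay traces: older actions get less credit
        s = s2
```

With traces (`LAMBDA > 0`), a single terminal reward updates every recently visited state-action pair in one sweep; with `LAMBDA = 0` this reduces to plain one-step Q-learning, where credit propagates backward only one state per episode.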
