Abstract

We introduce an end-to-end reinforcement learning (RL) solution for the problem of sending personalized digital health interventions. Previous work has shown that personalized interventions can be obtained through RL using simple, discrete state information such as the recent activity performed. In reality, however, such features are often not observed directly, but must instead be inferred from noisy, low-level sensor data obtained from mobile devices (e.g. accelerometers in mobile phones). One could first transform such raw data into discrete activities, but doing so could discard important details and would require training a classifier to infer these activities, which in turn needs a labeled training set. Instead, we propose to learn intervention strategies directly from the low-level sensor data, end-to-end, using deep neural networks and RL. We test our novel approach in a self-developed simulation environment which models, and generates, realistic sensor data for daily human activities, and show the short- and long-term efficacy of sending personalized physical workout interventions using RL policies. We compare several different input representations and show that learning from raw sensor data is nearly as effective and much more flexible.

CCS CONCEPTS
• Computing methodologies → Reinforcement learning; Sequential decision making; Online learning settings
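The end-to-end idea described above can be illustrated with a minimal sketch (not the authors' implementation): a small Q-network maps a raw accelerometer window directly to intervention actions, chosen epsilon-greedily. All names, dimensions, and the randomly initialised weights here are illustrative assumptions standing in for a trained model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: a window of raw tri-axial accelerometer samples
# (50 timesteps x 3 axes) is fed directly to a small policy network that
# chooses between two actions: 0 = no intervention, 1 = send intervention.
WINDOW, AXES, HIDDEN, ACTIONS = 50, 3, 16, 2

# Randomly initialised weights stand in for a network trained with RL.
W1 = rng.normal(0.0, 0.1, (WINDOW * AXES, HIDDEN))
b1 = np.zeros(HIDDEN)
W2 = rng.normal(0.0, 0.1, (HIDDEN, ACTIONS))
b2 = np.zeros(ACTIONS)

def q_values(window: np.ndarray) -> np.ndarray:
    """Map a raw sensor window to one Q-value per action."""
    h = np.maximum(0.0, window.reshape(-1) @ W1 + b1)  # ReLU hidden layer
    return h @ W2 + b2

def select_action(window: np.ndarray, epsilon: float = 0.1) -> int:
    """Epsilon-greedy action selection over the Q-values."""
    if rng.random() < epsilon:
        return int(rng.integers(ACTIONS))      # explore
    return int(np.argmax(q_values(window)))    # exploit

# Simulated noisy raw accelerometer window, as from a phone.
window = rng.normal(0.0, 1.0, (WINDOW, AXES))
action = select_action(window)
```

Because the network consumes the raw window directly, no hand-crafted activity classifier (and hence no activity-labeled dataset) is needed; the sensor representation can change without redesigning intermediate features.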
