Abstract

Parkinson's Disease (PD) patients frequently transition between the 'ON' state, in which medication is effective, and the 'OFF' state, in which symptoms re-emerge, and these fluctuations affect their quality of life. Monitoring these transitions is vital for personalized therapy. We introduce a framework based on Reinforcement Learning (RL) that detects transitions between medication states by learning from continuous movement data. Unlike traditional approaches, which typically identify each state from static data patterns, our approach focuses on the dynamic patterns of change throughout the transitions, providing a more generalizable method for medication-state monitoring. We integrated a deep Long Short-Term Memory (LSTM) neural network with three one-class unsupervised classifiers to implement an RL-based adaptive classifier. We evaluated the framework on two PD datasets: Dataset PD1 with 12 subjects (14-minute average recording) and Dataset PD2 with seven subjects (120-minute average recording). Data from wrist and ankle wearables captured transitions during 2- to 4-hour daily activities. The algorithm demonstrated its effectiveness in detecting medication states, achieving an average weighted F1-score of 82.94% when trained and tested on Dataset PD1. It also performed well when trained on Dataset PD1 and tested on Dataset PD2, with a weighted F1-score of 76.67%. It surpassed other models, was resilient to severe PD symptoms, and performed well with imbalanced data. Notably, prior work has not addressed generalizability from one dataset to another, which is essential for real-world applications with varied sensors. Our framework advances PD monitoring, setting the stage for more adaptive therapeutic methods and greatly enhancing the quality of life of PD patients.
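
To make the described architecture concrete, the sketch below shows one plausible way to combine an LSTM feature extractor with three one-class detectors that vote on whether an incoming movement window still matches the ON-state profile. This is a minimal illustration under assumed settings: the class names, window dimensions, detector choices, and majority-vote rule are our assumptions, not the paper's implementation, and the RL-based adaptation of the classifier is omitted.

    # Hypothetical sketch: LSTM encoder + three one-class detectors voting on
    # whether a wearable movement window deviates from the learned ON-state profile.
    # All names, dimensions, and the voting rule are illustrative assumptions.
    import torch
    import torch.nn as nn
    from sklearn.svm import OneClassSVM
    from sklearn.ensemble import IsolationForest
    from sklearn.covariance import EllipticEnvelope


    class MovementEncoder(nn.Module):
        """Deep LSTM mapping a window of wearable signals to an embedding."""

        def __init__(self, n_channels=6, hidden=32, layers=2):
            super().__init__()
            self.lstm = nn.LSTM(n_channels, hidden, num_layers=layers, batch_first=True)

        def forward(self, x):                      # x: (batch, time, channels)
            _, (h, _) = self.lstm(x)
            return h[-1]                           # last-layer hidden state, (batch, hidden)


    def fit_detectors(on_embeddings):
        """Fit three one-class models on embeddings of known ON-state windows."""
        detectors = [OneClassSVM(nu=0.1), IsolationForest(random_state=0), EllipticEnvelope()]
        for d in detectors:
            d.fit(on_embeddings)
        return detectors


    def flags_off_transition(detectors, embedding):
        """Flag a possible ON->OFF transition if a majority label the window an outlier (-1)."""
        votes = [d.predict(embedding)[0] == -1 for d in detectors]
        return sum(votes) >= 2


    if __name__ == "__main__":
        encoder = MovementEncoder()
        on_windows = torch.randn(64, 100, 6)       # placeholder ON-state training windows
        with torch.no_grad():
            detectors = fit_detectors(encoder(on_windows).numpy())

        new_window = torch.randn(1, 100, 6)        # placeholder incoming window
        with torch.no_grad():
            print("possible OFF transition:", flags_off_transition(detectors, encoder(new_window).numpy()))

In this sketch the detectors are trained only on ON-state data, so an OFF transition is treated as a deviation from that profile; the paper's RL component, which adapts the classifier over time, would sit on top of such a voting scheme.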
