Abstract

Electronic medical records are becoming increasingly accessible to researchers seeking to develop personalized healthcare recommendations that help physicians make better clinical decisions and treat patients. As a result, clinical decision research has become more focused on data-driven optimization. In this study, we analyze Korean patients' electronic health records, including medical history, medications, laboratory tests, and other information, shared by the national health insurance system. We aim to develop an expanded treatment recommendation model based on reinforcement learning, using the health records of South Korean citizens, to assist physicians. This study is significant in that expert and intelligent systems jointly address a problem with direct clinical relevance: prescribing appropriate diabetes medication given the physical state of each patient. Reinforcement learning is a framework for determining how an agent should act in a given environment to maximize a cumulative reward. The basic model for designing a reinforcement learning environment is the Markov decision process (MDP). Although effective and easy to use, the MDP model is limited by dimensionality; that is, many details about patients cannot be considered when building the model. To address this issue, we applied a contextual bandits approach to create a more practical model that can expand the state and action spaces by considering several details that are crucial for patients with diabetes. Finally, we validated the performance of the proposed contextual bandits model by comparing it with existing reinforcement learning algorithms.
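To illustrate the contextual bandits idea described above, the sketch below implements a standard disjoint LinUCB agent that selects a medication "action" from patient-state features. This is a generic illustration under assumed placeholders (the feature set, action count, and simulated reward are invented for the example), not the authors' actual model or data.

```python
import numpy as np


class LinUCB:
    """Disjoint LinUCB contextual bandit: one linear reward model per action."""

    def __init__(self, n_actions: int, n_features: int, alpha: float = 1.0):
        self.alpha = alpha  # exploration strength
        # Per-action ridge-regression statistics: A = X^T X + I, b = X^T r
        self.A = [np.eye(n_features) for _ in range(n_actions)]
        self.b = [np.zeros(n_features) for _ in range(n_actions)]

    def select(self, x: np.ndarray) -> int:
        """Pick the action with the highest upper confidence bound for context x."""
        scores = []
        for A, b in zip(self.A, self.b):
            A_inv = np.linalg.inv(A)
            theta = A_inv @ b  # estimated reward weights for this action
            ucb = theta @ x + self.alpha * np.sqrt(x @ A_inv @ x)
            scores.append(ucb)
        return int(np.argmax(scores))

    def update(self, action: int, x: np.ndarray, reward: float) -> None:
        """Update the chosen action's statistics with the observed reward."""
        self.A[action] += np.outer(x, x)
        self.b[action] += reward * x


# Hypothetical usage: 3 candidate prescriptions, 4 patient-state features
# (e.g., normalized HbA1c, fasting glucose, BMI, age) -- placeholder values only.
rng = np.random.default_rng(0)
bandit = LinUCB(n_actions=3, n_features=4, alpha=0.5)

for _ in range(1000):
    patient_context = rng.normal(size=4)  # stand-in for one patient's state
    action = bandit.select(patient_context)
    # Simulated reward, e.g., improvement in a glycemic-control score.
    reward = float(rng.normal(loc=0.1 * action, scale=1.0))
    bandit.update(action, patient_context, reward)
```

Because each decision conditions on the full context vector, such a bandit can incorporate many patient details without enumerating a joint state space, which is the dimensionality limitation of the tabular MDP formulation noted above.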
