TY - JOUR
T1 - Reinforcement learning-based expanded personalized diabetes treatment recommendation using South Korean electronic health records
AU - Oh, Sang Ho
AU - Park, Jongyoul
AU - Lee, Su Jin
AU - Kang, Seungyeon
AU - Mo, Jeonghoon
N1 - Publisher Copyright:
© 2022 Elsevier Ltd
PY - 2022/11/15
Y1 - 2022/11/15
N2 - Electronic medical records are becoming increasingly accessible to researchers seeking to develop personalized healthcare recommendations that aid physicians in making better clinical decisions and treating patients. As a result, clinical decision research has become more focused on data-driven optimization. In this study, we analyze South Korean patients' electronic health records (including medical history, medications, laboratory tests, and other information) shared by the national health insurance system. We aim to develop a reinforcement learning-based expanded treatment recommendation model using these records to assist physicians. This study is significant in that it combines expert and intelligent systems to directly address the clinical challenge of prescribing appropriate diabetes medication based on an assessment of the patient's physical state. Reinforcement learning is a mechanism for determining how agents should behave in a given environment to maximize a cumulative reward. The basic model for designing a reinforcement learning environment is the Markov decision process (MDP). Although effective and easy to use, the MDP model suffers from the curse of dimensionality; that is, many details about patients cannot be considered when building the model. To address this issue, we applied a contextual bandits approach to create a more practical model that can expand states and actions by considering several details that are crucial for patients with diabetes. Finally, we validated the performance of the proposed contextual bandits model by comparing it with existing reinforcement learning algorithms.
AB - Electronic medical records are becoming increasingly accessible to researchers seeking to develop personalized healthcare recommendations that aid physicians in making better clinical decisions and treating patients. As a result, clinical decision research has become more focused on data-driven optimization. In this study, we analyze South Korean patients' electronic health records (including medical history, medications, laboratory tests, and other information) shared by the national health insurance system. We aim to develop a reinforcement learning-based expanded treatment recommendation model using these records to assist physicians. This study is significant in that it combines expert and intelligent systems to directly address the clinical challenge of prescribing appropriate diabetes medication based on an assessment of the patient's physical state. Reinforcement learning is a mechanism for determining how agents should behave in a given environment to maximize a cumulative reward. The basic model for designing a reinforcement learning environment is the Markov decision process (MDP). Although effective and easy to use, the MDP model suffers from the curse of dimensionality; that is, many details about patients cannot be considered when building the model. To address this issue, we applied a contextual bandits approach to create a more practical model that can expand states and actions by considering several details that are crucial for patients with diabetes. Finally, we validated the performance of the proposed contextual bandits model by comparing it with existing reinforcement learning algorithms.
KW - Data-driven optimization
KW - Decision making
KW - Electronic health records
KW - Precision medicine
KW - Reinforcement learning
UR - http://www.scopus.com/inward/record.url?scp=85132863834&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85132863834&partnerID=8YFLogxK
U2 - 10.1016/j.eswa.2022.117932
DO - 10.1016/j.eswa.2022.117932
M3 - Article
AN - SCOPUS:85132863834
SN - 0957-4174
VL - 206
JO - Expert Systems with Applications
JF - Expert Systems with Applications
M1 - 117932
ER -