The rapid growth of medical data presents both opportunities and challenges for healthcare professionals and researchers. To process and analyze this complex, heterogeneous data effectively, we propose the evolutionary reinforcement learning with novelty-driven exploration and imitation learning for medical data processing (ERLNEIL-MDP) algorithm, which comprises a novelty computation mechanism, an adaptive novelty-fitness selection strategy, an imitation-guided experience fusion mechanism, and an adaptive stability preservation module. The novelty computation mechanism quantifies the novelty of each policy based on its dissimilarity to the current population and to historical data, promoting exploration and diversity. The adaptive novelty-fitness selection strategy balances exploration and exploitation by jointly considering each policy's novelty and fitness during selection. The imitation-guided experience fusion mechanism incorporates expert knowledge and demonstrations into the learning process, accelerating the discovery of effective solutions. The adaptive stability preservation module ensures the stability and reliability of learning by dynamically adjusting the algorithm's hyperparameters and preserving elite policies across generations. Together, these components enhance the exploration, diversity, and stability of the learning process. The significance of this work lies in its potential to substantially improve medical data analysis, supporting more accurate diagnoses and personalized treatments. Experiments on the MIMIC-III and n2c2 datasets demonstrate ERLNEIL-MDP's superior performance, achieving F1 scores of 0.933 and 0.928, respectively, representing 6.0% and 6.7% improvements over state-of-the-art methods. The algorithm exhibits strong convergence, high population diversity, and robustness to noise and missing data.
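To make the two exploration-related components concrete, the following is a minimal sketch of how novelty computation (dissimilarity to the population plus a historical archive) and novelty-fitness selection could be implemented. The distance measure, the k-nearest-neighbour averaging, the min-max normalisation, and the blending weight `w` are all illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def novelty_scores(policies, archive, k=3):
    """Novelty of each policy: mean Euclidean distance to its k nearest
    neighbours among the current population and a historical archive.
    (Illustrative assumption; the paper's dissimilarity measure may differ.)"""
    pool = np.vstack([policies, archive])  # population + historical data
    scores = []
    for p in policies:
        d = np.sort(np.linalg.norm(pool - p, axis=1))
        scores.append(d[1:k + 1].mean())  # skip d[0] == 0 (self-distance)
    return np.array(scores)

def novelty_fitness_select(fitness, novelty, w, n_select):
    """Novelty-fitness selection: rank policies by a weighted blend of
    normalised fitness and novelty; w trades exploitation (w -> 0)
    against exploration (w -> 1)."""
    def norm(x):
        rng = x.max() - x.min()
        return (x - x.min()) / rng if rng > 0 else np.zeros_like(x)
    score = (1 - w) * norm(fitness) + w * norm(novelty)
    return np.argsort(score)[::-1][:n_select]  # indices of selected policies
```

In an adaptive variant, `w` would be adjusted over generations (for example, raised when population diversity drops), which is one plausible reading of the "adaptive" selection strategy described above.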