Abstract

Pulmonary Embolism (PE), a non-cardiac cause of cardiac arrest, is difficult to diagnose because its presentation is non-specific and overlaps with several other clinical disorders such as Myocardial Infarction and Pneumonia. This paper presents a method for identifying Pulmonary Embolism (PE) as a cause of Heart Failure using a Deep Reinforcement Learning (DRL) algorithm. The methodology is divided into two phases. Phase I covers the creation of a novel dataset, since no data were available for this particular problem statement. Phase II deals with the application of the DRL algorithm to this newly constructed dataset. Because the dataset is inherently imbalanced, an imbalanced-classification algorithm based on DRL has been adopted. The classification task is formulated as a sequential decision-making process and solved with the DRL algorithm Double Deep Q-Network (DDQN). At each time step, an agent performs a classification action on a single sample; the environment assesses the action and, based on that assessment, returns a reward to the agent. To make the agent more sensitive to minority-class samples, rewards for the minority class are set higher. Under the supervision of this reward system, the agent learns an optimal classification policy for the imbalanced dataset. A comparative analysis of DDQN against other Deep Reinforcement Learning algorithms, including Dueling DQN and Proximal Policy Optimization (PPO), was performed to identify the best learning technique for the dataset. Furthermore, the DDQN algorithm was evaluated against a selection of Machine Learning (ML) algorithms. Experiments show that the DDQN model outperformed the other approaches, achieving an accuracy of 99.99%, a G-mean of 0.9999, a Recall of 1.0000, and a Specificity of 0.9999.
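To illustrate the formulation described above, the sketch below models classification of one sample per time step as a Markov decision process with class-dependent rewards. It is a minimal, hypothetical example only: the abstract does not specify the exact reward values, episode-termination rules, or feature set, so the choice of ±1 for the minority class and ±λ (λ = minority/majority ratio) for the majority class is an assumption borrowed from common DRL-based imbalanced-classification setups, not the authors' implementation.

```python
# Illustrative sketch only. Reward magnitudes and termination rules are
# assumptions; the paper's actual environment is not given in the abstract.
import numpy as np

class ImbalancedClassificationEnv:
    """Classification of one sample per time step, framed as an MDP.

    State  : feature vector of the current sample
    Action : predicted class label (e.g., 0 = no PE, 1 = PE)
    Reward : larger magnitude for minority-class samples, so the agent
             stays responsive to the rare (PE) class.
    """

    def __init__(self, X, y, minority_label=1):
        self.X, self.y = np.asarray(X), np.asarray(y)
        self.minority_label = minority_label
        n_min = int(np.sum(self.y == minority_label))
        n_maj = len(self.y) - n_min
        self.lam = n_min / max(n_maj, 1)   # reward scale for majority samples
        self.order = None
        self.t = 0

    def reset(self):
        # Shuffle the training samples at the start of each episode.
        self.order = np.random.permutation(len(self.y))
        self.t = 0
        return self.X[self.order[self.t]]

    def step(self, action):
        true_label = self.y[self.order[self.t]]
        correct = (action == true_label)
        if true_label == self.minority_label:
            reward = 1.0 if correct else -1.0            # minority: full reward
            done = not correct                           # optionally end episode on a missed minority sample
        else:
            reward = self.lam if correct else -self.lam  # majority: scaled reward
            done = False
        self.t += 1
        if self.t >= len(self.y):
            done = True
            next_state = np.zeros_like(self.X[0])
        else:
            next_state = self.X[self.order[self.t]]
        return next_state, reward, done, {}
```

A DDQN agent would then be trained on transitions sampled from such an environment, with the usual components (experience replay and a target network); under this asymmetric reward scheme, misclassifying a minority-class sample costs the agent more than misclassifying a majority-class one.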
