Abstract
Anemia detection using multimodal approaches leverages the integration of multiple data sources, such as imaging, clinical records, and hematological parameters, to improve diagnostic accuracy. Such methods can capture the complex interplay of factors contributing to anemia, providing a more comprehensive assessment than traditional single-modality techniques. In this research, a novel deep learning multimodal feature fusion approach is proposed for the automated detection of anemia using electronic health records (EHRs) and a conjunctiva image dataset. First, the EHR records are preprocessed by selecting the most relevant features with a Random Forest. Features from the conjunctiva images are extracted using a Reverse Convolution Block Attention Mechanism (RCBAM). The Grad-CAM algorithm is then applied to compute the pixel percentages of all extracted features. The outputs of the Random Forest and Grad-CAM stages are concatenated to form a multimodal feature representation, from which the most important information is selected with the help of a professional healthcare consultant. Experiments are performed on the textual and image datasets individually and on the fused representation. The results show that the proposed model outperforms state-of-the-art methods, achieving an accuracy of 95%. Despite challenges such as class imbalance and computational demands, our findings reveal substantial clinical potential, offering a patient-friendly and accessible diagnostic solution.
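To make the fusion step concrete, the following is a minimal sketch of how such a pipeline could be wired together, assuming scikit-learn for the Random Forest feature selection and PyTorch for the fusion classifier. The function and class names (select_ehr_features, AnemiaFusionClassifier) and all feature dimensions are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch of the multimodal fusion pipeline described above.
# All names and dimensions are illustrative, not the authors' code.

import numpy as np
import torch
import torch.nn as nn
from sklearn.ensemble import RandomForestClassifier

def select_ehr_features(X, y, top_k=10):
    """Rank EHR features by Random Forest importance and keep the top k."""
    rf = RandomForestClassifier(n_estimators=200, random_state=0)
    rf.fit(X, y)
    top_idx = np.argsort(rf.feature_importances_)[::-1][:top_k]
    return X[:, top_idx], top_idx

class AnemiaFusionClassifier(nn.Module):
    """Concatenates selected EHR features with image-derived features
    (e.g., Grad-CAM pixel percentages) and classifies the fused vector."""
    def __init__(self, ehr_dim, img_dim, hidden=64):
        super().__init__()
        self.head = nn.Sequential(
            nn.Linear(ehr_dim + img_dim, hidden),
            nn.ReLU(),
            nn.Linear(hidden, 2),  # anemic vs. non-anemic
        )

    def forward(self, ehr_feats, img_feats):
        fused = torch.cat([ehr_feats, img_feats], dim=1)  # multimodal fusion
        return self.head(fused)

# Usage on synthetic stand-in data:
X_ehr = np.random.rand(100, 30).astype(np.float32)   # 30 raw EHR features
y = np.random.randint(0, 2, 100)                     # binary anemia labels
X_sel, idx = select_ehr_features(X_ehr, y, top_k=10)
img_feats = torch.rand(100, 8)  # stand-in for Grad-CAM pixel percentages
model = AnemiaFusionClassifier(ehr_dim=10, img_dim=8)
logits = model(torch.from_numpy(X_sel), img_feats)
print(logits.shape)  # torch.Size([100, 2])
```

In this sketch, the expert-guided selection of "important information from the concatenated features" mentioned in the abstract would correspond to choosing which columns of the fused vector are retained before classification; that step is manual in the described method and so is not automated here.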