Malaria is a life-threatening disease transmitted by the bite of the infected female Anopheles mosquito. If it is not identified in its early stages, the disease can result in muscle paralysis or even the patient's death. The Rapid Diagnostic Test (RDT) is a frequently used approach for detecting malaria parasites in red blood cells; however, it may fail to identify infections when the parasite count in a sample is low. In microscopic detection, stained blood smears are examined under a microscope to diagnose malaria, but accurate diagnosis with this method is difficult, particularly in developing nations where the disease is most common. Microscopic examination is also expensive and time-consuming, and the accuracy of its results depends on the quality of the blood smears and the availability of a qualified specialist skilled in recognizing the disease. In addition, traditional deep learning-based malaria identification models demand considerable processing power. Therefore, an adaptive deep learning-based method is designed to detect malaria cells from medical images. The images are gathered from standard sources and then fed to the segmentation process, where abnormality segmentation is carried out with the developed Trans-MobileUNet++ (T-MUnet++) network. Trans-MobileUNet++ captures global context, so it is well suited for segmentation tasks. The segmented image is passed to the adaptive detection phase, where the Adaptive and Atrous Convolution-based Recurrent MobileNetV2 (AA-CRMV2) model is designed for the effective recognition of malaria cells. The efficiency of the designed approach is further improved by optimizing the parameters of the AA-CRMV2 network with the Updated Random Parameter-based Fennec Fox Optimization (URP-FFO) algorithm. Several experimental analyses are conducted to compare the implemented model with classical techniques and demonstrate its effectiveness.
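The abstract names the pipeline stages but does not specify their architectures, so the following is only a minimal illustrative sketch of the described workflow (segmentation, classification with atrous convolutions and a recurrent layer, and metaheuristic hyperparameter tuning). Every class and function in it (TinySegNet, AtrousRecurrentClassifier, evaluate, and a random search standing in for URP-FFO) is a hypothetical stand-in, not the authors' T-MUnet++, AA-CRMV2, or URP-FFO implementation.

```python
# Illustrative sketch only: all modules below are hypothetical stand-ins that
# mirror the described pipeline (segment -> atrous + recurrent classify ->
# tune hyperparameters), not the paper's actual networks.
import random
import torch
import torch.nn as nn

class TinySegNet(nn.Module):
    """Hypothetical encoder-decoder stand-in for the T-MUnet++ segmenter."""
    def __init__(self, ch=16):
        super().__init__()
        self.enc = nn.Sequential(nn.Conv2d(3, ch, 3, padding=1), nn.ReLU(),
                                 nn.MaxPool2d(2),
                                 nn.Conv2d(ch, ch * 2, 3, padding=1), nn.ReLU())
        self.dec = nn.Sequential(nn.Upsample(scale_factor=2),
                                 nn.Conv2d(ch * 2, 1, 3, padding=1))

    def forward(self, x):
        # Per-pixel abnormality mask in [0, 1]
        return torch.sigmoid(self.dec(self.enc(x)))

class AtrousRecurrentClassifier(nn.Module):
    """Hypothetical stand-in for AA-CRMV2: dilated convs plus a GRU over spatial features."""
    def __init__(self, ch=16, hidden=32, dilation=2):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(4, ch, 3, padding=dilation, dilation=dilation), nn.ReLU(),
            nn.MaxPool2d(4))
        self.gru = nn.GRU(ch, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 2)  # parasitized vs. uninfected

    def forward(self, img, mask):
        feat = self.conv(torch.cat([img, mask], dim=1))   # B x C x H x W
        seq = feat.flatten(2).transpose(1, 2)             # B x (H*W) x C
        _, h = self.gru(seq)
        return self.head(h[-1])

def evaluate(dilation, hidden):
    """Placeholder fitness: a real pipeline would train and return validation accuracy."""
    seg, clf = TinySegNet(), AtrousRecurrentClassifier(hidden=hidden, dilation=dilation)
    img = torch.rand(2, 3, 64, 64)
    logits = clf(img, seg(img))
    return -logits.var().item()  # dummy score in place of a validation metric

# Random search used here purely as a simple surrogate for the URP-FFO tuner.
best = max(({"dilation": random.choice([1, 2, 4]), "hidden": random.choice([16, 32, 64])}
            for _ in range(5)), key=lambda p: evaluate(**p))
print("selected hyperparameters:", best)
```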