The Internet of Medical Things (IoMT) is an extension of the Internet of Things (IoT). It focuses on integrating medical devices to serve people who cannot easily access medical care, especially people in rural areas and elderly people living alone. The main objective of this work is to design a real-time interactive system that provides medical services to those who lack sufficient medical infrastructure. With this system, people can receive medical services at their location with minimal medical infrastructure and lower treatment cost. The designed system can be extended to address the family of SARS viruses; for experimentation, we take COVID-19 as a test case. The proposed system comprises several modules, such as the user interface, analytics, and cloud. The user interface is designed for interactive data collection. At the initial stage, it collects preliminary medical information, such as blood oxygen level and RT-PCR results. With a pulse oximeter, users can measure their blood oxygen level, and with a swab test kit they can determine COVID-19 positivity. This information is uploaded to the proposed system as preliminary data via the designed UI. If the system identifies COVID-19 positivity, it requests that the person upload X-ray/CT images for ranking the severity of the disease. The system is designed for multimodal data; hence, it can handle X-ray images, CT images, and textual data (RT-PCR results). Once X-ray/CT images are collected via the UI, they are forwarded to the AI module for analytics. The proposed AI system is designed for multi-disease classification: it classifies whether a patient is affected by COVID-19, pneumonia, or another viral infection. It also measures the intensity of lung infection so that suitable treatment can be provided.
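The triage step described above can be sketched in Python as follows. This is a minimal illustration, not the paper's implementation: the function and field names are invented for clarity, and the SpO2 threshold of 94% is an assumed example value, not a figure from the paper.

```python
from dataclasses import dataclass

@dataclass
class PreliminaryInfo:
    """Preliminary data collected via the UI (names are illustrative)."""
    spo2_percent: float      # blood oxygen level from a pulse oximeter
    rt_pcr_positive: bool    # COVID-19 result from a swab test kit

def triage(info: PreliminaryInfo) -> str:
    """Decide the next step of the screening workflow."""
    if info.rt_pcr_positive:
        # Positive RT-PCR: request X-ray/CT upload for severity ranking.
        return "request_imaging"
    if info.spo2_percent < 94.0:
        # Low oxygen saturation may also warrant imaging (assumed threshold).
        return "request_imaging"
    return "no_further_action"

print(triage(PreliminaryInfo(spo2_percent=98.0, rt_pcr_positive=True)))
# request_imaging
```

In this sketch, the uploaded X-ray/CT images would then be forwarded to the AI module for classification and severity scoring.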
Numerous deep convolutional neural network (DCNN) architectures are available for medical image classification. We used ResNet-50, ResNet-100, ResNet-101, VGG-16, and VGG-19 for better classification. From the experimentation, it was observed that ResNet-101 and VGG-19 outperform the others, with an accuracy of 97% on CT images, while ResNet-101 achieves 98% accuracy on X-ray images. To obtain enhanced accuracy, we used a majority voting classifier, which combines the results of all classifiers and outputs the majority-voted label, reducing classifier bias. Finally, the proposed system automatically generates a textual test summary report, accessible via a user-friendly graphical user interface (GUI). This reduces report generation time and individual bias.
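The majority voting step can be sketched as follows. This is a generic hard-voting ensemble, assuming each network's per-image prediction is already available; the classifier names and labels are illustrative, and ties fall back to the first label encountered.

```python
from collections import Counter

def majority_vote(predictions: dict) -> str:
    """Return the label predicted by the most classifiers.

    `predictions` maps classifier name -> predicted label for one image,
    e.g. the outputs of the ResNet and VGG models on a single X-ray.
    """
    counts = Counter(predictions.values())
    label, _ = counts.most_common(1)[0]  # ties: first label seen wins
    return label

# Example votes for one image (hypothetical values):
votes = {
    "resnet50": "covid19",
    "resnet101": "covid19",
    "vgg16": "pneumonia",
    "vgg19": "covid19",
}
print(majority_vote(votes))  # covid19
```

Because the final label must be agreed on by most of the networks, a single model's systematic error is less likely to propagate to the report, which is the bias reduction the paragraph refers to.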