Abstract

Vision-based activity monitoring has enabled applications that are revolutionizing the e-health sector. Given the potential of crowdsourced data for developing large-scale applications, researchers are working to integrate smart hospitals with crowdsourced data. A key challenge in extracting meaningful patterns from such large volumes of data is annotation. In particular, the annotation of medical images plays an important role in providing pervasive health services. Although several image annotation methods exist, both manual and semi-supervised, their high cost and computation time remain major issues. To overcome these issues, a methodology is proposed for the automatic annotation of images. The proposed approach consists of three tiers: frame extraction, interest point generation, and clustering. Since medical imaging lacks an appropriate dataset for our experimentation, we introduce a new dataset of Human Healthcare Actions (HHA). The dataset comprises videos of multiple medical emergencies, including allergic reactions, burns, asthma, brain injury, bleeding, poisoning, heart attack, choking, and spinal injury. We also propose an evaluation model to assess the effectiveness of the proposed methodology. The proposed technique achieves a promising score of 78% in terms of Adjusted Rand Index. Furthermore, to investigate the effectiveness of the proposed technique, a comparison is made by training a neural network classifier on labels generated by the proposed methodology and by existing techniques such as semi-supervised and manual methods. The overall precision of the proposed methodology is 0.75 (75%), compared with 0.69 (69%) for semi-supervised learning.
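
The three-tier pipeline summarized above (frame extraction, interest point generation, clustering, followed by Adjusted Rand Index evaluation) can be illustrated with a minimal sketch. The sketch below assumes OpenCV for video decoding and ORB interest points, and scikit-learn for k-means clustering and the Adjusted Rand Index; these library choices, function names, and parameters are illustrative assumptions and not the paper's exact implementation.

```python
# Minimal sketch of a three-tier automatic annotation pipeline:
# frame extraction -> interest point generation -> clustering,
# with Adjusted Rand Index evaluation against reference labels.
# Library choices (OpenCV, scikit-learn) and parameters are assumptions.

import cv2
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import adjusted_rand_score

def extract_frames(video_path, step=10):
    """Tier 1: sample every `step`-th frame from a video."""
    frames, cap, idx = [], cv2.VideoCapture(video_path), 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if idx % step == 0:
            frames.append(frame)
        idx += 1
    cap.release()
    return frames

def frame_descriptors(frames):
    """Tier 2: compute ORB keypoint descriptors and summarize each frame."""
    orb = cv2.ORB_create()
    features = []
    for frame in frames:
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        _, desc = orb.detectAndCompute(gray, None)
        if desc is not None:
            # Represent the frame by the mean of its keypoint descriptors.
            features.append(desc.mean(axis=0))
    return np.array(features)

def cluster_and_evaluate(features, reference_labels, n_clusters):
    """Tier 3: cluster frame features, then score against reference labels."""
    predicted = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(features)
    return predicted, adjusted_rand_score(reference_labels, predicted)
```

In this sketch, cluster assignments act as the automatically generated annotation labels, and the Adjusted Rand Index measures their agreement with reference labels, mirroring the evaluation reported in the abstract.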
