Abstract

Computer vision (CV) is increasingly applied in healthcare alongside wireless and communication technology. CV methods are incorporated into healthcare systems to provide automated interaction for patient monitoring; such systems must analyze and interpret visual data captured from patients. In this paper, a multi-modal visualization analysis (MMVA) method is introduced to improve the low-complexity processing of automated human-machine interaction (HMI) in health monitoring. The proposed method identifies a patient's facial expressions from the expressions and textures of the input visualization. It relies on three layers of a convolutional neural network (CNN) for texture classification, correlation, and detection of facial visualization using stored information. The three layers are chained to reduce complexity and misdetection in the analysis. The feature-based tuning chain in the first CNN layer minimizes the impact of facial and textural variants that would otherwise cause misdetection. The second layer performs correlation to match expressions accurately from the captured image. The third layer performs facial visualization to reach a quick decision and stores error data as additional training samples. Experimental results show that the proposed method achieves 95.702% recognition accuracy, outperforming conventional methods, while reducing analysis time and misdetection and improving the recognition ratio.
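The chained three-stage pipeline described above (feature-based tuning, correlation matching, and decision with error feedback into the training set) can be sketched as follows. This is a minimal illustration, not the paper's implementation: the feature extractor, template matching, and all names and data are assumptions standing in for the CNN layers.

```python
import numpy as np

def texture_features(image):
    # Stage 1 stand-in: feature-based tuning reduced to simple intensity
    # statistics in place of learned texture features.
    return np.array([image.mean(), image.std()])

def correlate(features, templates):
    # Stage 2 stand-in: correlate features against stored expression
    # templates and return the label of the best match.
    scores = {label: -np.linalg.norm(features - t)
              for label, t in templates.items()}
    return max(scores, key=scores.get)

def classify(image, templates, training_errors, true_label=None):
    # Stage 3 stand-in: make the final decision; misdetections are
    # appended to the training set, as the abstract describes.
    features = texture_features(image)
    prediction = correlate(features, templates)
    if true_label is not None and prediction != true_label:
        training_errors.append((features, true_label))
    return prediction

# Toy usage with hypothetical expression templates.
templates = {"neutral": np.array([0.5, 0.1]),
             "smile": np.array([0.8, 0.3])}
errors = []
img = 0.8 + np.random.default_rng(0).normal(0.0, 0.05, (4, 4))
print(classify(img, templates, errors, true_label="smile"))
```

Chaining the stages this way means each layer only consumes the previous layer's output, which is the source of the reduced complexity the paper claims.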
