Abstract

Smart expert systems support a wide range of applications that improve people's quality of life, with smart health monitoring among the most prominent. An intelligent assistive system is one such application, helping Alzheimer's patients carry out day-to-day activities while enabling real-time monitoring by caretakers. Fall detection is one of the core tasks of an assistive system, yet most existing methods rely on either vision or sensor data alone. Vision-based methods produce false positives under occlusion, while sensor-based methods yield false results when the patient lies still for long periods. We address this problem by proposing a multimodal fall detection system (MMFDS) that operates on hybrid data comprising both vision and sensor streams. A random forest and a long-term recurrent convolutional network (LRCN) serve as the primary classifiers for the sensor data and vision data, respectively. MMFDS fuses the two modalities through a majority-voting ensemble over the hybrid data to enhance fall detection accuracy. Evaluated on the UP-Fall detection dataset, the proposed system reaches 99.2% accuracy, with corresponding improvements in precision, recall, and F1 score.
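As a rough illustration of the fusion step, the sketch below combines per-window predictions from a sensor-side classifier and a vision-side classifier by majority voting. The function name, the probability-matrix inputs, and the confidence-based tie-break between the two voters are assumptions for illustration only; the abstract does not specify these implementation details.

```python
import numpy as np

def majority_vote(sensor_probs, vision_probs):
    """Fuse per-window class probabilities from a sensor model (e.g. a random
    forest) and a vision model (e.g. an LRCN) by majority voting.
    With only two voters, ties are broken by the higher class confidence
    (an assumed rule; the paper's exact tie-break is not given here)."""
    sensor_pred = np.argmax(sensor_probs, axis=1)
    vision_pred = np.argmax(vision_probs, axis=1)

    return np.where(
        sensor_pred == vision_pred,   # both modalities agree: keep that label
        sensor_pred,
        np.where(                     # otherwise defer to the more confident model
            sensor_probs.max(axis=1) >= vision_probs.max(axis=1),
            sensor_pred,
            vision_pred,
        ),
    )

# Example: two windows with binary no-fall / fall probabilities
sensor_probs = np.array([[0.2, 0.8], [0.6, 0.4]])
vision_probs = np.array([[0.1, 0.9], [0.3, 0.7]])
print(majority_vote(sensor_probs, vision_probs))  # -> [1 1]
```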
