Abstract

Fall detection is drawing increasing attention from both academia and industry. Because the human body occupies a small area relative to the background in images, the complex background interferes with the extraction of fall and non-fall features. To reduce this interference, a fall detection method based on fused saliency maps is proposed, consisting of a saliency map generation model and a fall detection model. In the saliency map generation model, M-level segmentation produces segmented images at different levels; saliency detection uses a two-stream convolutional neural network to extract global and local features and generate saliency maps; and the fusion stage automatically learns fusion weights according to mean structural similarity. In the fall detection model, a simple deep network is constructed to extract discriminant fall/non-fall features from the fused saliency maps. Experimental results show that the proposed method achieves 99.67% and 98.92% accuracy on the UR Fall Detection database and our self-built NT Fall Detection database, respectively, and converges faster than training on RGB or depth images. By reducing the interference of complex backgrounds, the proposed fall detection method outperforms the other methods in both accuracy and convergence speed.
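The fusion step described above, weighting each saliency map by mean structural similarity, could be sketched roughly as follows. This is a minimal illustration, not the paper's implementation: the function names `mean_ssim` and `fuse_saliency_maps` are hypothetical, the single-window SSIM is a simplification of the usual windowed MSSIM, and using the pixel-wise average map as the reference for similarity is an assumption.

```python
import numpy as np

def mean_ssim(x, y, c1=1e-4, c2=9e-4):
    """Global (single-window) SSIM between two maps in [0, 1].

    A simplified stand-in for windowed mean SSIM; c1 and c2 are the
    usual small stabilizing constants."""
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx**2 + my**2 + c1) * (vx + vy + c2))

def fuse_saliency_maps(maps):
    """Fuse M saliency maps with weights from their mean SSIM against
    the pixel-wise average map (an assumed choice of reference)."""
    maps = [m.astype(np.float64) for m in maps]
    ref = np.mean(maps, axis=0)                       # reference map
    w = np.array([mean_ssim(m, ref) for m in maps])   # similarity scores
    w = w / w.sum()                                   # normalize to weights
    fused = sum(wi * m for wi, m in zip(w, maps))     # weighted combination
    return np.clip(fused, 0.0, 1.0)
```

The fused map can then be fed to the downstream fall/non-fall classifier in place of the raw RGB or depth frame.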
