Abstract

To address the complex implementation, low accuracy, poor applicability, and high latency of some contemporary human fall detection algorithms that fail to achieve real-time performance, this paper proposes a fall detection method combining multiple algorithms: the Tiny-YOLO object detection algorithm, Kalman-filter target trajectory tracking, AlphaPose human pose estimation, and a spatio-temporal convolutional network. This combination resolves the difficulty of extracting human target features and the resulting loss of tracking, accelerates training, and enables fast, accurate detection of human falls from video. The algorithm is evaluated on the publicly available UR Fall Detection Dataset and the Le2i Fall Detection Dataset, and the experimental results show that it achieves high detection accuracy and effectively reduces the false detection rate.
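The Kalman trajectory-tracking stage mentioned above can be illustrated with a minimal sketch. The class below is a hypothetical simplification, not the paper's implementation: it tracks a single bounding-box centre under a constant-velocity motion model, with illustrative noise parameters.

```python
import numpy as np

class KalmanTracker:
    """Constant-velocity Kalman filter for one bounding-box centre (x, y).

    A hypothetical simplification of a Kalman trajectory-tracking stage;
    state is [x, y, vx, vy] and all noise parameters are illustrative.
    """

    def __init__(self, x, y, dt=1.0):
        self.x = np.array([x, y, 0.0, 0.0])               # state estimate
        self.P = np.eye(4) * 10.0                         # state covariance
        self.F = np.eye(4)                                # motion model
        self.F[0, 2] = self.F[1, 3] = dt                  # position += velocity*dt
        self.H = np.zeros((2, 4))                         # measurement model:
        self.H[0, 0] = self.H[1, 1] = 1.0                 # we observe (x, y) only
        self.Q = np.eye(4) * 0.01                         # process noise
        self.R = np.eye(2) * 1.0                          # measurement noise

    def predict(self):
        """Propagate the state one frame ahead; used when detection is missing."""
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        return self.x[:2]

    def update(self, z):
        """Correct the prediction with a detected centre z = (x, y)."""
        y = np.asarray(z, dtype=float) - self.H @ self.x  # innovation
        S = self.H @ self.P @ self.H.T + self.R           # innovation covariance
        K = self.P @ self.H.T @ np.linalg.inv(S)          # Kalman gain
        self.x = self.x + K @ y
        self.P = (np.eye(4) - K @ self.H) @ self.P
        return self.x[:2]
```

In a detect-then-track pipeline of this kind, `predict()` is typically called every frame and `update()` only when the detector returns a box, which is what lets tracking survive brief detection dropouts.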
