Abstract

Researchers across the world are interested in human fall detection and activity recognition. Fall detection is a challenging problem that may be tackled in several ways, and several approaches have been suggested in recent years. These applications determine whether a person is standing, falling, or performing some other activity. Among these activities, elderly fall detection is vital, because falls are common and dangerous events that affect people of all ages, with the elderly disproportionately affected. Such applications typically use sensors to detect rapid changes in a person's movement. The sensors can be embedded in smartphones, necklaces, and smart wristbands, turning them into "wearable" gadgets. These gadgets must be attached to the wearer's body, which can be unsettling because this type of sensor must be monitored constantly; this is not always possible and cannot be done in public settings among strangers. In this respect, fall detection based on video camera images has several advantages over wearable sensor-based systems. This paper presents a vision-based approach to fall detection. The key feature of the proposed method is that it detects falls automatically from plain images captured by an ordinary video camera, eliminating the need for ambient sensors. Feature extraction is performed on a self-annotated RGB version of the UR Fall dataset. We used YOLO and its variants; YOLO enables the detection of falls and a variety of actions for multiple people in the same scene. As a result, the method, using YOLOv1-v4 and tiny YOLOv4, can be deployed in real-world edge settings on a Raspberry Pi or an OAK-D device.
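As a rough illustration of how per-frame person detections from a detector such as YOLO could be turned into a fall decision, the sketch below applies a bounding-box aspect-ratio heuristic. This heuristic, the `(x, y, w, h)` box format, and all pixel values are illustrative assumptions, not the paper's actual method.

```python
def classify_pose(box, ratio_threshold=1.0):
    """Classify a detected person as 'fallen' or 'upright'.

    box: (x, y, w, h) bounding box in pixels, as might be produced by a
    YOLO-style detector (assumed format, not the paper's exact output).
    A box that is wider than it is tall suggests a person lying on the
    ground; this simple threshold rule is an illustrative heuristic only.
    """
    x, y, w, h = box
    return "fallen" if w / h > ratio_threshold else "upright"

# Hypothetical detections from two frames:
standing_box = (120, 40, 60, 180)   # tall, narrow box -> upright
lying_box = (80, 300, 200, 70)      # wide, short box -> fallen
print(classify_pose(standing_box))  # upright
print(classify_pose(lying_box))     # fallen
```

In a full pipeline, a rule like this would run per detected person per frame, so multiple people in the same scene can be monitored simultaneously, as the YOLO-based approach described above allows.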
