Abstract
This paper presents a detailed analysis of a 4D representation of events generated by a dynamic stereo vision sensor for the recognition of a person's fall. Dynamic vision detectors consist of self-signaling pixels that react autonomously to scene dynamics and asynchronously generate events upon relative changes in light intensity. Their complete on-chip redundancy reduction, wide dynamic range, and high temporal resolution allow efficient and continuous activity monitoring in natural environments. Using a stereo pair of dynamic vision detectors, the scene dynamics can be represented in a 4D space (including time) at high temporal resolution. In this work, we performed 100 recordings of fall scenarios in an indoor environment using this dynamic stereo vision sensor. Seven features were extracted and analyzed for three types of falls so that robust parameters can be retained for fall recognition. The results of this analysis are presented and show promising outcomes.
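To picture the 4D event representation described above (pixel position, stereo-derived depth, and time), the following minimal Python sketch shows one plausible data layout. The class, field, and function names are illustrative assumptions for this sketch only and do not reflect the paper's actual implementation or notation.

```python
from dataclasses import dataclass
from typing import List


@dataclass
class StereoEvent:
    """One event from a stereo pair of dynamic vision sensors, placed in 4D.

    Field names are assumptions, not the paper's notation:
    x, y     -- pixel coordinates in the reference sensor
    z        -- depth recovered from stereo matching
    t_us     -- event timestamp in microseconds (high temporal resolution)
    polarity -- +1 for a relative intensity increase, -1 for a decrease
    """
    x: int
    y: int
    z: float
    t_us: int
    polarity: int


def events_in_window(events: List[StereoEvent],
                     t_start_us: int, t_end_us: int) -> List[StereoEvent]:
    """Collect events whose timestamps fall inside a temporal window.

    Feature extraction for fall recognition (e.g. statistics of the event
    cloud within a window) would operate on such slices; the seven features
    analyzed in the paper are not reproduced here.
    """
    return [e for e in events if t_start_us <= e.t_us < t_end_us]
```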