Abstract
Yawning detection has a variety of important applications, including driver fatigue detection, well-being assessment of humans, driving behavior monitoring, operator attentiveness detection, and understanding the intentions of a person with a tongue disability. In all of these applications, automatic detection of yawning is an important system component. In this paper, we design and implement such an automatic system, using computer vision, which runs on a computationally limited embedded smart camera platform to detect yawning. We use a significantly modified implementation of the Viola-Jones algorithm for face and mouth detection and then use backprojection to measure both the rate and the amount of change in the mouth in order to detect yawning. As a proof of concept, we have also implemented and tested our system on an actual embedded smart camera platform, the APEX from CogniVue Corporation. In our design and implementation, we took into consideration practical aspects that many existing works ignore, such as the real-time requirements of the system and the limited processing power, memory, and computing capabilities of the embedded platform. Comparisons with existing methods show significant improvements in the correct yawning detection rate achieved by our proposed method.
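The abstract outlines a two-stage pipeline: Viola-Jones-style detection of the face and mouth, followed by histogram backprojection to measure how much the mouth region changes over time. The sketch below is a minimal illustration of that general idea using OpenCV's stock Haar cascades and calcBackProject; it is not the authors' modified detector or their exact yawning criterion, and the cascade choices, thresholds, and the "response drop" rule are illustrative assumptions.

```python
# Minimal sketch (not the paper's implementation): Haar-cascade face/mouth
# detection plus hue-histogram backprojection to flag large mouth changes.
import cv2
import numpy as np

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
mouth_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_smile.xml")  # stand-in mouth detector

ref_hist = None      # hue histogram of the (assumed closed) reference mouth
OPEN_RATIO = 0.35    # illustrative threshold, not a value from the paper

cap = cv2.VideoCapture(0)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = face_cascade.detectMultiScale(gray, 1.1, 5)
    for (x, y, w, h) in faces:
        # Search for the mouth only in the lower half of the detected face.
        roi_gray = gray[y + h // 2:y + h, x:x + w]
        roi_bgr = frame[y + h // 2:y + h, x:x + w]
        mouths = mouth_cascade.detectMultiScale(roi_gray, 1.5, 15)
        for (mx, my, mw, mh) in mouths[:1]:
            mouth = roi_bgr[my:my + mh, mx:mx + mw]
            hsv = cv2.cvtColor(mouth, cv2.COLOR_BGR2HSV)
            hist = cv2.calcHist([hsv], [0], None, [32], [0, 180])
            cv2.normalize(hist, hist, 0, 255, cv2.NORM_MINMAX)
            if ref_hist is None:
                ref_hist = hist  # first detection taken as the closed-mouth reference
                continue
            # Backproject the reference histogram onto the current mouth region;
            # a large drop in response suggests the mouth appearance has changed
            # substantially (e.g., wide open during a yawn).
            backproj = cv2.calcBackProject([hsv], [0], ref_hist, [0, 180], 1)
            response = float(np.count_nonzero(backproj > 50)) / backproj.size
            if response < OPEN_RATIO:
                print("possible yawn detected")
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break
cap.release()
```

A real embedded implementation would also need the temporal component described in the abstract (the rate of change across frames, not just a single-frame threshold) and would have to respect the memory and processing limits of the target platform.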