Abstract
The present work tackles four common aspects of the human-in-the-loop challenge, a major question in cyber-physical systems (CPS) research. First, strategic decision measures not considered by previous studies are identified in a divided-attention task, and their importance in predicting human performance is demonstrated. This supports the creation of a more complete human behavior model to be integrated into CPS. Second, a generic data-driven approach is proposed for predicting human errors from eye-gaze and hand-motion features with high accuracy. Anticipating human errors facilitates timely computer intervention and the reliable operation of complex systems. Third, it is shown that demanding gaze-based control of interfaces can be productive in terms of strategies, even though it impairs performance. This promotes intuitive interaction with computers and is especially important in cases where traditional control methods are not feasible. Fourth, an intuitive monocular-vision-based ego-speed estimation and a time-to-collision prediction algorithm are investigated, using as input two video streams that record the frontal road view and the driver's perspective. Leveraging smartglasses as sensory devices and combining them with deep learning algorithms improves the decision making of human assistance systems. The results contribute to increasing the human awareness of CPS and to incorporating humans into the loop as an integral part.