Abstract

Advanced Driver Assistance Systems (ADAS) are experiencing higher levels of automation, facilitated by the synergy among various sensors integrated within vehicles, thereby forming an Internet of Things (IoT) framework. Among these sensors, cameras have emerged as valuable tools for detecting driver fatigue and distraction. This study introduces HYDE-F, a Head Pose Estimation (HPE) system exclusively utilizing depth cameras. HYDE-F adeptly identifies critical driver head poses associated with risky conditions, thus enhancing the safety of IoT-enabled ADAS. The core of HYDE-F's innovation lies in its dual-process approach: it employs a fractal encoding technique and keypoint intensity analysis in parallel. These two processes are then fused using an optimization algorithm, enabling HYDE-F to blend the strengths of both methods for enhanced accuracy. Evaluations conducted on a specialized driving dataset, Pandora, demonstrate HYDE-F's competitive performance compared to existing methods, surpassing current techniques in terms of average Mean Absolute Error (MAE) by nearly 1°. Moreover, case studies highlight the successful integration of HYDE-F with vehicle sensors. Additionally, HYDE-F exhibits robust generalization capabilities, as evidenced by experiments conducted on standard laboratory-based HPE datasets, i.e., the Biwi and ICT-3DHP databases, achieving average MAEs of 4.9° and 5°, respectively.
