Abstract

The adoption of Advanced Driver Assistance Systems (ADAS) has expanded dramatically in recent years, with the goal of improving road safety and driving comfort. Driver monitoring is a key component of ADAS, since it identifies anomalies such as drowsiness, distraction, and impairment to ensure safe vehicle operation. Traditional methods of detecting driver anomalies rely on intrusive physiological measurements, whereas ADAS with built-in cameras offers a non-intrusive and cost-effective alternative. This study investigates ensemble learning for driver anomaly detection in vehicles equipped with ADAS and in-vehicle cameras. Deep learning models, namely ResNet50, DenseNet201, and Inception V3, were deployed as learner models to classify driving behavior. The raw data used in this study were videos obtained from the National Tsinghua Driver Drowsiness Detection (NTHUDD) dataset. Of the two ensemble models used, the eXtreme Gradient Boosting (XGBoost) classifier pooled the predictions of the learner models and attained a remarkable average accuracy and precision of 99% on the validation dataset. Classes such as laugh_talk and yawning were correctly distinguished from one another. The ensemble technique capitalized on the strengths of the individual models while mitigating their weaknesses, resulting in robust and trustworthy predictions. The findings highlight the potential of ensemble modeling to enhance driver anomaly detection systems, providing valuable insights for improving road safety. By continuously monitoring driver behavior and detecting anomalies, ADAS can issue timely warnings and interventions to prevent accidents and save lives.
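As a rough illustration of the stacking scheme described above, the minimal sketch below combines three ImageNet-pretrained backbones with an XGBoost meta-classifier. It assumes the Keras implementations of ResNet50, DenseNet201, and Inception V3 and the xgboost Python package; the class count, image size, and placeholder arrays are illustrative assumptions, not the authors' actual pipeline, and in practice each learner would first be fine-tuned on frames extracted from the NTHUDD videos.

# Sketch only: base CNN learners feeding an XGBoost meta-classifier.
import numpy as np
import xgboost as xgb
from tensorflow.keras.applications import ResNet50, DenseNet201, InceptionV3
from tensorflow.keras import layers, models

NUM_CLASSES = 4            # assumed number of behavior classes (e.g. normal, laugh_talk, yawning, drowsy)
IMG_SHAPE = (224, 224, 3)  # assumed frame size after preprocessing

def build_learner(backbone_cls):
    # Wrap a pretrained backbone with a small softmax classification head.
    backbone = backbone_cls(include_top=False, weights="imagenet",
                            input_shape=IMG_SHAPE, pooling="avg")
    outputs = layers.Dense(NUM_CLASSES, activation="softmax")(backbone.output)
    return models.Model(backbone.input, outputs)

learners = [build_learner(b) for b in (ResNet50, DenseNet201, InceptionV3)]

def stacked_features(frames):
    # Concatenate each learner's class probabilities into one feature vector per frame.
    return np.hstack([m.predict(frames, verbose=0) for m in learners])

# Placeholder data standing in for decoded NTHUDD video frames and labels.
X_train = np.random.rand(8, *IMG_SHAPE).astype("float32")
y_train = np.arange(8) % NUM_CLASSES
X_val = np.random.rand(4, *IMG_SHAPE).astype("float32")

# XGBoost pools the learners' predictions into the final classification.
meta = xgb.XGBClassifier(n_estimators=200, max_depth=4)
meta.fit(stacked_features(X_train), y_train)
val_pred = meta.predict(stacked_features(X_val))

The design point of the stacking step is that the meta-classifier learns which base model to trust for which class, which is how the ensemble can offset the weaknesses of any single CNN.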
