Abstract

Sensor-equipped driver monitoring systems are a significant advance in vehicle safety technology, continuously observing the driver's condition and behavior through a variety of sensors. These systems identify indicators of exhaustion, inattention, or impairment to improve road safety. They can gauge the driver's focus and attentiveness by tracking head position, eye movements, and facial expressions. When inattention or drowsiness is detected, the system can notify the driver through visual, aural, or tactile alerts, prompting the driver to refocus or take a break; some sophisticated systems can even take over the vehicle's steering in emergencies. This study focuses on integrating artificial intelligence (AI) and machine learning (ML) into Driver Monitoring Systems (DMS) to reduce accidents caused by driver exhaustion and distraction. It discusses two primary types of DMS: Multi-Sensor-Based Systems (S-HDx) and Vision-Based Systems (V-HDx). Vision-based systems examine the driver or specific driving characteristics to determine whether the driver is inattentive or sleepy, whereas multi-sensor-based systems identify drowsiness using physiological or other non-visual signals. The study evaluates AI and ML algorithms for detecting fatigue and distraction in drivers. Algorithms used in vision-based systems include single-shot detection (SSD), MobileNetV2, feature pyramid networks, and convolutional neural networks (CNNs); multi-sensor-based systems use algorithms such as SVM, CNN, XGBoost, and decision trees. Vision-based systems are recommended for DMS development because of their user-friendliness and non-intrusive nature. Future research could examine these techniques and algorithms independently to create a more effective DMS.
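
The vision-based monitoring described above commonly relies on eye-closure cues derived from facial landmarks. As an illustrative sketch only, and not code from the study, the snippet below shows a minimal eye-aspect-ratio (EAR) drowsiness monitor in Python. The EAR formula is a standard heuristic; the 0.21 threshold and 48-frame limit are assumed example values, and the eye landmark coordinates are assumed to come from an upstream face-landmark detector (e.g., a CNN-based model).

```python
# Illustrative sketch (not from the paper): eye-aspect-ratio (EAR) based
# drowsiness flagging, a common building block of vision-based DMS.
# Eye landmarks are assumed to come from an upstream face-landmark model;
# here they are passed in as plain (x, y) tuples.

from dataclasses import dataclass
from math import dist


def eye_aspect_ratio(eye):
    """EAR = (|p2-p6| + |p3-p5|) / (2 * |p1-p4|) for six eye landmarks."""
    p1, p2, p3, p4, p5, p6 = eye
    vertical = dist(p2, p6) + dist(p3, p5)
    horizontal = dist(p1, p4)
    return vertical / (2.0 * horizontal)


@dataclass
class DrowsinessMonitor:
    # Assumed example values, not figures reported by the study.
    ear_threshold: float = 0.21    # below this, the eye is treated as closed
    closed_frames_limit: int = 48  # ~2 s at 24 fps before raising an alert
    _closed_frames: int = 0

    def update(self, left_eye, right_eye) -> bool:
        """Feed one frame's eye landmarks; return True when an alert should fire."""
        ear = (eye_aspect_ratio(left_eye) + eye_aspect_ratio(right_eye)) / 2.0
        if ear < self.ear_threshold:
            self._closed_frames += 1
        else:
            self._closed_frames = 0
        return self._closed_frames >= self.closed_frames_limit
```

In a full DMS, an alert from a monitor like this would typically be routed to the visual, aural, or tactile warnings the abstract describes, or combined with head-pose and facial-expression cues before intervening.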
