Recent innovations in roadway construction include work zone intrusion alert (WZIA) systems that detect traffic hazards (e.g., speeding or intruding vehicles) and raise alarms (e.g., sounds, lights) with preset attributes (e.g., volume, duration) to warn workers on foot. Designing alarms raised by wearable warning devices (e.g., smartwatches) for roadway workers remains an emerging area of transportation safety research. As roadway work zones begin to adopt these novel warning systems, issues relating to alarm attributes may persist: differences in individual alarm preferences and alarm fatigue from repeated exposure to unchanging alarm attributes can reduce worker vigilance towards traffic hazards. Reinforcement learning (RL)-based controls can adjust alarm attributes in real time to counteract these issues and ensure consistent worker reactions. This study proposes an RL-based approach to train agents that control alarm attributes under different reward functions, each prioritising different worker safety reactions (e.g., body movement, head turn). Results show that a reward function with equal weight for each type of reaction produces an alarm agent that ensures consistent safe worker reactions to traffic hazards. The findings also inform the future development (i.e., fine-tuning) of RL-based alarms to counteract the lack of safe worker reactions observed in real-world work zones.
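To make the reward design concrete, the sketch below shows one way a per-step reward could combine multiple worker reaction signals with equal weights, as the abstract describes for the best-performing agent. It is a minimal illustration, not the authors' implementation: the reaction names (body_movement, head_turn), their scoring, and the helper function alarm_reward are assumptions introduced here.

```python
from typing import Dict, Optional


def alarm_reward(reactions: Dict[str, float],
                 weights: Optional[Dict[str, float]] = None) -> float:
    """Weighted sum of observed worker reactions to an alarm.

    reactions: hypothetical per-reaction scores in [0, 1]
               (e.g., 1.0 if the reaction was observed, 0.0 otherwise).
    weights:   per-reaction weights; defaults to equal weighting,
               mirroring the equal-weight reward function the abstract
               reports as producing consistent safe reactions.
    """
    if weights is None:
        weights = {name: 1.0 / len(reactions) for name in reactions}
    return sum(weights[name] * reactions[name] for name in reactions)


# Example: the worker moved away from the hazard but did not turn their head.
print(alarm_reward({"body_movement": 1.0, "head_turn": 0.0}))  # 0.5
```

A reward shaped this way would let the RL agent trade off alarm attributes (e.g., volume, duration) against whichever reactions the reward function emphasises; unequal weights would bias the learned alarm policy towards a single reaction type.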