Abstract

Distracted pedestrians, like their distracted driver counterparts, pose an increasingly dangerous threat and are a precursor to pedestrian accidents in urban communities, often resulting in grave injuries and fatalities. Mitigating such hazards to pedestrian safety requires safety systems and applications that can effectively detect distracted behavior. Designing effective pedestrian safety frameworks is feasible given the availability of sophisticated mobile and wearable devices equipped with high-precision on-board sensors capable of capturing fine-grained user movements and context, especially distracted activities. However, the key technical challenge in designing such systems is accurate, real-time recognition of distractions with minimal resources, given the memory, computation, and communication limitations of these devices. Several recently published papers detect pedestrian activities by leveraging complex activity recognition frameworks that use mobile and wearable sensor data. The primary focus of these efforts, however, was on achieving high detection accuracy; as a result, most designs are either resource-intensive and unsuitable for implementation on mainstream mobile devices, computationally slow and thus unusable for real-time pedestrian safety applications, or dependent on specialized hardware and therefore less likely to be adopted by most users. Toward a practical pedestrian safety system, we design an efficient, real-time pedestrian distraction detection technique that overcomes several shortcomings of existing approaches. We demonstrate the practicality of the proposed technique by implementing prototypes on commercially available mobile and wearable devices and evaluating them with data collected from human subject participants in realistic pedestrian experiments.
Through these evaluations, we show that our technique achieves a favorable balance of computational efficiency, detection accuracy, and energy consumption compared to the other techniques evaluated in this paper.
