Abstract

An increasing number of safety departments in organizations across the U.S. are offering mobile apps that allow their local community members to report potential risks, such as hazards, suspicious events, ongoing incidents, and crimes. These "community-sourced risk" systems are designed to let the safety departments take action to prevent or reduce the severity of situations that may harm the community. However, little is known about the actual use of such community-sourced risk systems from the perspective of both community members and the safety departments. This study is the first large-scale empirical analysis of community-sourced risk systems. More specifically, we conducted a comprehensive system log analysis of LiveSafe, a community-sourced risk system used by more than two hundred universities and colleges. Our findings revealed a mismatch between what the safety departments expected to receive and what their community members actually reported, and identified several factors (e.g., anonymity, organization, and tip type) that were associated with the safety departments' responses to their members' tips. Our findings provide design implications for chatbot-enabled community-risk systems and make practical contributions for safety organizations and practitioners seeking to improve community engagement.
