Abstract

General-purpose artificial intelligence (GPAI) is a form of advanced AI system that includes the recently introduced ChatGPT. GPAI is known for its capacity to understand and emulate human responses, and potentially offers an opportunity to reduce human error in tasks that involve analysis, judgement, and reasoning. To support officers in such tasks, the police presently use a range of decision-making support tools, one of which is THRIVE (Threat, Harm, Risk, Investigation, Vulnerability, and Engagement). THRIVE is designed to provide police practitioners with a model to improve their identification of, and response to, vulnerability. Despite the existence of such decision models, a 2020 meta-analysis of police cases resulting in death or serious injury identified contributory failures that included poor risk identification, poor risk management, failure to adhere to evidentiary processes, poor criminal investigations, and inadequate police engagement with victims, including the level of care and assistance provided (Allnock et al., 2020). Importantly, this report identified human error as a major underpinning factor in these failures.

Although GPAI offers an opportunity to improve analysis, judgement, and reasoning, such systems have not yet been tested in policing, a field where any reduction in human error, particularly in the assessment of threat, harm, risk, and vulnerability, can potentially save lives. This study is the first attempt to do so, using the chain-of-thought prompting methodology to test the GPAI ChatGPT (3.5 vs 4) in a controlled environment on 30 life-like police scenarios crafted and analyzed by expert practitioners. In doing so, we identify that ChatGPT 4 significantly outperforms its 3.5 predecessor, indicating that GPAI presents considerable opportunity in policing.
However, systems that use this technology require extensive directional prompting to ensure outputs that are accurate and, therefore, potentially safe to use in an operational setting. The article concludes by discussing how practitioners and researchers can further refine police-related chain-of-thought prompts, or use application programming interfaces (APIs), to improve the responses provided by such GPAI systems.
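To illustrate the chain-of-thought prompting approach the abstract describes, the sketch below constructs a step-by-step prompt structured around the six THRIVE elements. The function name, wording, and step structure are illustrative assumptions only, not the authors' actual prompts; the resulting string would then be sent to a GPAI model via an API such as the one the abstract mentions.

```python
def build_thrive_prompt(scenario: str) -> str:
    """Build a hypothetical chain-of-thought prompt that asks the model to
    reason through each THRIVE element in turn before recommending a response.

    Note: this is an illustrative sketch, not the prompt used in the study.
    """
    elements = [
        "Threat", "Harm", "Risk", "Investigation", "Vulnerability", "Engagement",
    ]
    # One explicit reasoning step per THRIVE element, to elicit
    # step-by-step (chain-of-thought) reasoning rather than a bare answer.
    steps = "\n".join(
        f"Step {i}: Assess the {element} present in the scenario."
        for i, element in enumerate(elements, start=1)
    )
    return (
        "You are assisting a police call handler. Reason step by step.\n"
        f"{steps}\n"
        "Finally, recommend a graded response and explain your reasoning.\n\n"
        f"Scenario: {scenario}"
    )
```

In use, the returned prompt would be passed as the message content in an API request, allowing the step structure to be refined and re-tested without retraining the model.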


