The purpose of this study is to examine the controversies surrounding the algorithmic predictive policing strategy adopted by the New York Police Department (NYPD). The NYPD has been very active in pursuing algorithmic predictive policing, but it faces a variety of criticisms. First, the collection of personal information for predictive policing conflicts with the spirit of the Fourth Amendment to the U.S. Constitution, which protects against unreasonable searches and invasions of privacy. Second, as officers rely on instructions generated by algorithmic predictions, police operations become dehumanized. Third, police authority is concentrated on people or areas flagged as dangerous, creating a risk that those so labeled will be stigmatized and ultimately pushed toward criminality. Fourth, the police unilaterally exercise police power on the basis of algorithmic predictions rather than providing a two-way policing service that reflects the needs of citizens and the community. Fifth, the efficiency of algorithmic predictive policing has not been verified, and concerns have been raised about contamination of the big data on which it relies. Sixth, the handling of data that the police provide to private companies is opaque, and it is impossible to confirm that the data are secure. Seventh, there is the problem of police power being effectively supplanted by the for-profit operations of private companies. Eighth, although the NYPD is subject to oversight of algorithmic predictive policing under the Local Law in Relation to Automated Decision Systems Used by Agencies and the Public Oversight of Surveillance Technology (POST) Act, these problems remain unresolved. Ninth, the NYPD continues to engage in legal disputes with human rights watchdog groups such as the Brennan Center over its refusal to disclose information on algorithmic predictive policing, and it has lost most of these cases, raising questions about the legality of its big data collection and management.