AI Tools in U.S. Policing Raise Concerns Over Efficacy and Bias

Concerns are mounting regarding the use of artificial intelligence (AI) tools by police departments across the United States. Critics argue that these technologies, while marketed as modern solutions, often perpetuate existing biases and may fail to effectively enhance public safety. Recent incidents reveal that AI facial-recognition systems have led to wrongful arrests, disproportionately impacting people of color.

In recent years, numerous individuals have been arrested on the basis of AI-generated leads, only to be cleared when evidence showed they were not at the alleged crime scene. Such cases underscore the risks of integrating AI into law enforcement practices. As AI and technology journalist Graham Lovelace notes, “This technology can be highly unreliable, and it can cause harm.”

According to Lovelace, the pressure on law enforcement to trust AI outputs may stem from cultural conditioning that treats technology as authoritative. Reliance on these tools often overlooks the data biases built into them, particularly those shaped by historical patterns of over-policing in marginalized communities.

Reinforcing Existing Inequities

The integration of AI in policing may serve to automate and reinforce systemic inequalities rather than address them. For instance, predictive policing tools like Geolitica can categorize neighborhoods as crime hotspots based on previous police activity rather than actual crime statistics. This approach can justify increased police presence in already heavily surveilled areas, further entrenching cycles of scrutiny and suspicion among residents.

Similar concerns have been raised about ShotSpotter, a gunshot-detection system used by the New York City Police Department. A 2023 audit found that only 8 to 20 percent of ShotSpotter alerts corresponded to confirmed gunfire incidents. Critics argue that despite roughly $54 million spent on the system from 2015 to 2025, it has not significantly reduced gun violence in the areas it monitors.

While a spokesperson for SoundThinking, the company behind ShotSpotter, claimed a “97 percent accuracy rate,” independent studies paint a starkly different picture, with reports indicating that more than 90 percent of alerts produce no evidence of a gun-related crime. Part of the gap appears to lie in what is being measured: the company’s figure describes how reliably the system flags sounds as gunfire, while audits ask whether responding officers find evidence of a crime on the scene. The discrepancy nonetheless raises questions about the efficacy of these technologies and whether the resources devoted to them could be better spent on community support initiatives.

The Human Cost of Automation

The implications of relying on AI in policing extend beyond inefficiency. Lovelace points out that once individuals are flagged by these systems, they may face ongoing surveillance and scrutiny regardless of their innocence. “Once you’re targeted, you tend to receive a label, and all your information is taken,” he explained. This creates an environment in which even people who are later exonerated remain on police radar as potential suspects.

Critics, including New York City Council Member Tiffany Cabán, argue that the pursuit of “public safety” through AI technologies often overlooks the need for genuine community-driven safety solutions. Cabán notes that deploying tools like ShotSpotter in low-income neighborhoods leads to unnecessary police interventions and heightens the risk of tragic encounters. “When police are sent over and over again into communities for no reason… it’s just a recipe for disaster,” she stated.

Despite the backlash, law enforcement agencies continue to adopt AI solutions, often citing efficiency and modernity as key justifications. Yet, the lack of transparency surrounding these technologies raises further concerns about their implementation. In New York City, recent legislation has mandated that the NYPD disclose its surveillance policies, but reports indicate that the department has been slow to comply and often provides vague information.

As AI technologies gain traction within law enforcement, it becomes increasingly important to scrutinize their effects on civil liberties and community relations. The challenges of integrating AI into policing reveal a landscape in which each tool must be weighed against ethical considerations and its potential to reinforce existing societal injustices.

The reliance on AI in policing may promise enhanced efficiency, but without rigorous oversight and accountability, it risks perpetuating harm to the very communities it is intended to protect.