Tech optimism collides with public skepticism over FRT, AI in policing

As facial recognition and AI technologies surge in adoption across U.S. law enforcement, evidence of improper use, bias, and lack of accountability underscores an urgent need for oversight despite the political momentum favoring deregulation and corporate expansion.
Despite the techno-optimism radiating from industry leaders and the Trump administration, the public mood is increasingly skeptical. The repeal of President Biden’s Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence executive order – which imposed federal guardrails on AI – and the empowerment of Elon Musk’s Department of Government Efficiency (DOGE) with sweeping access to sensitive government data have only fueled this unease.
Public opinion, supported by expert analysis, shows a widening chasm between AI’s promise and its performance, especially in policing contexts where false identifications, surveillance overreach, and algorithmic discrimination are not just theoretical risks, but are documented realities.
Facial recognition technology, which has become a centerpiece of law enforcement’s AI toolkit, is increasingly known not for its precision, but for its disparities. Numerous academic and governmental studies have documented racial and gender biases in these systems.
Biometric testing by the National Institute of Standards and Technology (NIST) has found that the majority of facial recognition algorithms are “more likely to misidentify people of color, women and the elderly because their faces tend to appear less frequently in data used to train the algorithms,” though the most accurate algorithms show very low differentials in the Institute’s latest testing. Of the seven people wrongfully arrested in the U.S. based on false facial recognition matches and subsequently cleared of all charges, six are Black.
According to a comprehensive review by the Pew Research Center in April, 55 percent of both AI experts and the public expressed high concern over bias in AI decision-making, including in facial recognition systems. This concern is rooted in the hard evidence that facial recognition algorithms consistently perform less accurately when identifying individuals who are not white, male, and middle-aged.
The consequences are not abstract. In criminal justice, a false identification can lead to wrongful arrest, detention, or worse. Yet, under the Trump administration’s deregulation push, including proposed federal legislation that would block state-level AI rules for a decade, the deployment of these tools continues with minimal accountability.
Despite the growing alarm, some tech executives like OpenAI’s Sam Altman have recently reversed course, downplaying the need for regulation after previously warning of AI’s risks. This inconsistency, coupled with massive federal contracts and opaque deployment practices, erodes public trust in both corporate actors and government regulators.
What’s striking is how bipartisan the concern has become. According to the Pew survey, only 17 percent of Americans believe AI will have a positive impact on the U.S. over the next two decades, while 51 percent express more concern than excitement about its expanding role. These numbers represent a significant shift from earlier years and a rare area of consensus between liberal and conservative constituencies.
Roughly 55 percent of U.S. adults and 57 percent of AI experts say they want more control over how AI is used in their lives, signaling a shared sense of powerlessness as AI seeps into everyday functions. This is especially relevant in law enforcement settings, where the public often has no meaningful way to opt out of being surveilled, analyzed, or flagged by an algorithm.
The divide is not only between the public and policymakers, but also between public and private sector technologists. For instance, 60 percent of AI experts working at universities say they have little or no confidence that companies will responsibly develop AI, compared to just 39 percent of those in private firms, a revealing indicator of how profit motives may cloud ethical oversight.
Similarly, the Brookings Institution recently said that its “review of the literature reveals that public opinion on AI is both multifaceted and dynamic. Overall, the U.S. and U.K. publics tend to be more concerned than optimistic about AI’s impacts, though many hold mixed or even inconsistent views. For instance, more people worry about AI’s effects on overall employment than on their own livelihoods; and while there is broad support for AI regulation, people trust neither tech companies nor governments to implement it effectively on their own.”
“As we argue, understanding public attitudes on AI serves multiple crucial functions,” the Brookings Institution said. “It helps AI developers align products with societal expectations, enables civil society to advocate effectively, and allows policymakers to craft regulations that reflect public values rather than merely technical or commercial imperatives.”
“However,” Brookings noted, “to realize these goals, we need better mechanisms to study and understand the public’s evolving views,” adding that “public sentiment toward AI appears to lean more negative than positive in Western countries, with many surveys in the U.S. and U.K. showing more people expressing concern than excitement about AI’s impacts, though with some important subtleties.”
Bias in law enforcement AI systems is not simply a product of technical error; it reflects systemic underrepresentation and skewed priorities in AI design. According to the Pew survey, only 44 percent of AI experts believe women’s perspectives are adequately accounted for in AI development. The numbers drop even further for racial and ethnic minorities. Just 27 percent and 25 percent say the perspectives of Black and Hispanic communities, respectively, are well represented in AI systems.
These disparities are not mere oversights; they are embedded in how data is collected, labeled, and deployed. AI systems trained on datasets that disproportionately reflect the experiences of white men are likely to fail when applied across a diverse population. This is particularly dangerous in law enforcement, where some facial recognition systems might misidentify Black individuals at a much higher rate, increasing the risk of wrongful interactions with police.
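To make the arithmetic behind these differentials concrete, the sketch below shows how a disaggregated error audit might be run on a face matcher’s output. It is a minimal, hypothetical example, not drawn from any named vendor’s system or from NIST’s own methodology: the group labels, threshold, and sample records are invented for illustration, and a real audit would use a vendor’s comparison scores and a properly curated benchmark.

```python
# Minimal, hypothetical sketch of a per-group error audit for a face matcher.
# Group names, the threshold, and the sample records below are illustrative only.
from collections import defaultdict

# Each record: (demographic_group, same_person?, match_score from the matcher)
records = [
    ("group_a", True, 0.91), ("group_a", False, 0.12), ("group_a", False, 0.55),
    ("group_b", True, 0.62), ("group_b", False, 0.58), ("group_b", True, 0.88),
]

THRESHOLD = 0.6  # score at or above which the system declares a match

stats = defaultdict(lambda: {"fm": 0, "imp": 0, "fnm": 0, "gen": 0})
for group, same_person, score in records:
    predicted_match = score >= THRESHOLD
    if same_person:
        stats[group]["gen"] += 1                      # genuine comparison
        stats[group]["fnm"] += (not predicted_match)  # false non-match
    else:
        stats[group]["imp"] += 1                      # impostor comparison
        stats[group]["fm"] += predicted_match         # false match

for group, s in stats.items():
    fmr = s["fm"] / s["imp"] if s["imp"] else float("nan")
    fnmr = s["fnm"] / s["gen"] if s["gen"] else float("nan")
    print(f"{group}: false match rate={fmr:.2f}, false non-match rate={fnmr:.2f}")
```

Large gaps between groups’ false match or false non-match rates are the kind of demographic differentials the studies cited above report; when training data underrepresents a group, that group’s rates tend to be the ones that suffer.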
Experts interviewed by Pew described the underrepresentation of marginalized communities in AI as both a technical flaw and a societal failing. One expert noted that because AI is largely trained on Internet data sourced from wealthy Western nations, the models replicate the structural biases embedded in those societies. Another expert observed that efforts to improve workforce diversity, which are critical to model fairness, are being quietly abandoned as companies face political pushback and economic pressures.
Meanwhile, the Trump administration has taken the opposite of a restrained approach. Its dismantling of Biden’s executive order and the rise of DOGE have led to unprecedented AI deployments across federal agencies without transparency or accountability. As the Pew data shows, 62 percent of the public and 53 percent of AI experts lack confidence in the government’s ability to effectively regulate AI. A similar majority is skeptical of industry self-regulation.
With congressional Republicans backing a moratorium on state AI regulation and pushing legislation that would prevent enforcement for ten years, the very mechanisms needed to address bias and ensure accuracy are being systematically dismantled. This legal preemption could allow flawed and discriminatory AI systems to proliferate without recourse for those harmed.
It is within this vacuum that law enforcement agencies continue to deploy facial recognition technology, bolstered by federal dollars and corporate partnerships. The result is a system that uses untested or biased AI on real people, without meaningful public oversight and in environments like criminal justice where the stakes could not be higher.
Tom Wheeler predicted in his book, Techlash: Who Makes the Rules in the Digital Gilded Age?, that the collision of concentrated tech power, unchecked innovation, and public harm will produce political recoil. This backlash is already visible in polling data, expert warnings, and state-level legislative actions seeking to curb biometric surveillance.
Public frustration is growing not only with the government’s inaction, but also with industry deflections. Efforts to present AI as neutral or inherently fair are faltering in the face of real-world evidence. The more that AI is shown to make discriminatory decisions, the more demands arise for democratic oversight, public audits, and regulation with teeth.
Despite these concerns, though, political leaders aligned with the tech sector appear unwilling to acknowledge the depth of the problem. This dismissive posture may prove short-sighted. As history shows, unchecked technological excess inevitably leads to regulatory correction when public harm becomes too visible to ignore.
The deployment of AI in law enforcement is perhaps the most vivid illustration of how quickly unregulated technologies can undermine civil liberties. When the faulty use of facial recognition leads to a wrongful arrest, the burden of error is not abstract; it is carried by real people, disproportionately from marginalized communities.
To reverse this trajectory, public trust must be restored through systemic reforms. This means independent audits of law enforcement AI tools, community oversight boards, mandatory bias testing, and an insistence that federal funding for AI be tied to transparent and accountable practices.
Without these changes, the legitimacy of AI in public safety will remain in question. The potential for technological advancement will be overshadowed by fear, suspicion, and harm, and AI’s promise to improve lives and increase fairness will be squandered, not because of what the technology is, but because of how it is used.