US law enforcement adopts AI with caution amid growing capabilities

Artificial intelligence (AI) is no longer a distant frontier for federal law enforcement agencies. It’s a rapidly maturing capability being cautiously but increasingly integrated into investigations, operational workflows, and internal training systems. From the Federal Bureau of Investigation (FBI) to the Transportation Security Administration (TSA) and the Naval Criminal Investigative Service (NCIS), agency officials are beginning to publicly discuss the ways in which AI, including machine learning, automation, and large language models, is reshaping how they approach some of their most difficult and data-intensive challenges.
This evolution was made clear at the recent Law Enforcement and Public Safety Preview Event hosted by the Bethesda, Maryland, chapter of the Armed Forces Communications and Electronics Association (AFCEA) International, where key agency representatives laid out both the promise and the perils of AI adoption.
For the FBI, the roots of its current AI capability trace back to the 2013 Boston Marathon bombing, which tested the limits of the bureau’s technological infrastructure. Investigators were inundated with hours of public and private CCTV footage and found themselves lacking the tools to rapidly process and analyze it. Kiersten Schiliro, senior technical advisor in the FBI’s Operational Technology Division, explained that this led to the creation of a multimedia processing framework that could leverage computer vision to triage large volumes of visual data.
The system can identify license plates, extract text from images, detect specific objects, and even track faces, not for identification but for movement analysis across large datasets. Footage that once took nearly a year to analyze fully can now be processed in about two days, thanks to these tools.
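The bureau has not published the details of its pipeline, but the triage pattern Schiliro describes is straightforward to illustrate: scan footage automatically, flag the segments that contain objects of interest, and queue only those for human review. The Python sketch below does this with OpenCV’s stock pedestrian detector; the file name, sampling rate, and choice of detector are illustrative assumptions, not details of the FBI system.

```python
# Minimal sketch of CCTV triage: flag video segments that contain
# people so analysts review only those, not the full recording.
# Uses OpenCV's built-in HOG + linear-SVM pedestrian detector.
import cv2

hog = cv2.HOGDescriptor()
hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

cap = cv2.VideoCapture("cctv_clip.mp4")  # hypothetical input file
fps = cap.get(cv2.CAP_PROP_FPS) or 30.0
frame_idx = 0
hits = []  # timestamps (in seconds) where people were detected

while True:
    ok, frame = cap.read()
    if not ok:
        break
    if frame_idx % 15 == 0:  # sample every 15th frame to save compute
        boxes, _ = hog.detectMultiScale(frame, winStride=(8, 8))
        if len(boxes) > 0:
            hits.append(frame_idx / fps)
    frame_idx += 1

cap.release()
print(f"Flagged {len(hits)} sampled frames for human review")
```

The point of a tool like this is not to replace the analyst but to shrink the haystack: the machine proposes, the human disposes.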
“These computer vision tools are one of our most mature AI use cases,” Schiliro said. “They have really come through, and they are going to continue to evolve.” The operational implications are significant. In an age of ubiquitous surveillance, AI-powered triage tools have become essential for extracting actionable intelligence.
But the FBI’s use of AI is not limited to video analytics. The Criminal Investigative Division has also begun deploying AI to support efforts against child exploitation, a domain where the identification of victims can be especially complex. Children often don’t appear in official databases such as DMV records or the FBI’s Next Generation Identification system. In these cases, facial recognition technology powered by AI has become a vital tool. Schiliro emphasized the life-saving impact of these technologies, stating, “This technology is being used to save victims’ lives.”
Despite their utility, these tools are used selectively and under tightly controlled circumstances. Every AI deployment by the FBI is subjected to privacy impact assessments and ethical scrutiny by the bureau’s internal AI Ethics Council. “We have been very cautious,” Schiliro noted. “We only have one chance to get these kinds of things right.” The bureau’s strategy of focusing early AI adoption on high-priority use cases has helped it sidestep some of the broader institutional resistance often associated with digital transformation.
At TSA, AI adoption has taken a slightly different form, emphasizing operational efficiency and workforce enablement. Kristin Ruiz, TSA deputy assistant administrator and deputy chief information officer, highlighted how the agency is drawing on the Department of Homeland Security’s broader science and technology portfolio. Rather than relying on a massive influx of new hires or external vendors, TSA has made AI part of the agency’s upskilling agenda, training existing IT staff to use emerging tools.
A key internal initiative is the TSA Answer Engine, an AI-powered resource designed to quickly provide frontline employees with accurate answers about operational policies and procedures. By removing bottlenecks in information flow, it aims to boost agility and consistency across field offices.
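TSA has not described the Answer Engine’s internals, but systems of this kind typically pair retrieval over a policy corpus with a language model that composes the final answer. The sketch below shows only the retrieval half, using TF-IDF similarity from scikit-learn; the passages and question are invented examples, not TSA policy text.

```python
# Minimal sketch of the retrieval step behind a policy "answer engine":
# rank policy passages against an employee's question and return the
# best match. A production system would feed the retrieved text to a
# language model to phrase the answer.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

passages = [
    "Officers must verify traveler identity at the checkpoint podium.",
    "Liquids over 3.4 ounces are not permitted in carry-on baggage.",
    "Report suspected insider threats to the supervisory officer on duty.",
]
question = "What is the carry-on limit for liquids?"

vectorizer = TfidfVectorizer().fit(passages + [question])
scores = cosine_similarity(
    vectorizer.transform([question]), vectorizer.transform(passages)
)[0]

best = max(range(len(passages)), key=lambda i: scores[i])
print(passages[best])  # prints the liquids policy passage
```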
TSA’s Innovation Lab, meanwhile, is where AI meets real-world application. One project demonstrated the use of virtual reality holograms paired with generative AI (such as ChatGPT) to simulate unpredictable passenger encounters for TSA officer training. The dynamic nature of AI responses introduced variability into the training scenarios, helping officers gain confidence in adapting to real-life situations. “You could run through the same scenario, but each officer might handle it differently and get a different response,” Ruiz explained. This approach has not only improved officer readiness but also underscored how AI can support experiential learning in mission-critical environments.
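The demonstration was not described at the code level, but the role-play technique Ruiz describes maps naturally onto a chat-style model loop: a system prompt casts the model as a passenger, and the trainee’s replies feed back into the conversation. The sketch below assumes the OpenAI Python client and an illustrative model name; none of it reflects TSA’s actual setup.

```python
# Minimal sketch of LLM-driven role-play for checkpoint training:
# the model improvises as a frustrated passenger, the trainee types
# officer responses. Requires OPENAI_API_KEY in the environment;
# model name and prompt wording are illustrative assumptions.
from openai import OpenAI

client = OpenAI()
messages = [{
    "role": "system",
    "content": ("You are a traveler at an airport security checkpoint. "
                "Act mildly frustrated and improvise realistic objections. "
                "Stay in character and answer in one or two sentences."),
}]

print("Scenario started. Type 'quit' to end.")
while True:
    reply = client.chat.completions.create(
        model="gpt-4o-mini", messages=messages
    ).choices[0].message.content
    print(f"Passenger: {reply}")
    messages.append({"role": "assistant", "content": reply})

    officer = input("Officer: ")
    if officer.strip().lower() == "quit":
        break
    messages.append({"role": "user", "content": officer})
```

Because the model’s replies vary from run to run, two officers working the same scenario will face different objections, which is exactly the variability Ruiz highlighted.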
Outside training and operations, TSA’s use of facial recognition and biometric technologies at airports continues to expand. These tools are deeply embedded in identity verification procedures at security checkpoints and ports of entry, and the agency is working in coordination with the Departments of Justice and State to improve interagency data integration. TSA’s AI systems are already functioning at the operational edge of federal surveillance and identity resolution infrastructure, raising ongoing questions about civil liberties and transparency that have yet to be fully resolved.
For the Naval Criminal Investigative Service, AI is still in an exploration phase. The agency, which operates under the Department of the Navy and supports law enforcement and counterintelligence activities, is “starting small,” according to Richard Dunwoodie, acting executive assistant director of the Operational Technology and Cyber Innovation Directorate. NCIS has initiated limited pilot projects in both business operations and field investigations, using AI for tasks ranging from vehicle recognition to policy navigation.
Interestingly, Dunwoodie emphasized that AI has been part of NCIS’ toolkit for some time, even if it wasn’t labeled as such. From data aggregation in criminal investigations to automated monitoring during high-traffic events like Fleet Weeks or air shows, machine learning has been quietly shaping workflows. Now, with broader visibility into commercial AI tools and support from the Department of Defense’s Chief Digital and Artificial Intelligence Office, NCIS is seeking to formalize and expand its AI use.
One critical issue facing NCIS – as well as other government agencies – is the uncontrolled use of commercial AI applications by staff, some of whom have turned to tools like ChatGPT to draft reports or formulate queries. “That is an issue,” Dunwoodie said, noting the risk of exposing sensitive knowledge gaps to adversarial actors. The concern points to a larger need for federal-level policy coherence around commercial AI use by government employees.
Across agencies, the consensus is clear: while AI presents opportunities to improve accuracy, speed, and scale, its implementation in federal law enforcement must be measured, ethical, and transparent. Schiliro summed it up, saying, “There are just some AI tools that do better than human reviews. But while we all want to use AI, it is not free, and we must identify a measurable outcome.”
That cost-benefit calculus, weighing whether AI improves accuracy, increases efficiency, or provides genuinely novel capabilities, now shapes procurement and deployment decisions across the federal law enforcement community. AI must not only solve problems better than humans can; it must also do so in a way that stands up to legal scrutiny, public accountability, and operational reliability.
Agencies remain wary of the political and legal landmines that come with AI use, particularly as civil liberties groups and lawmakers scrutinize biometric surveillance, predictive policing, and facial recognition systems. Oversight bodies have begun to demand clearer audit trails, ethical use cases, and documented accuracy rates for algorithms that make high-stakes determinations.
Despite these challenges, there is no sign that federal law enforcement will retreat from AI, especially under the Trump administration, which is pushing government-wide use of AI, often without adequate privacy and safety considerations, critics say. The paradigm shift is underway. From identifying child exploitation victims and analyzing surveillance video in record time, to creating immersive training for transportation officers and assessing operational readiness in the Navy’s law enforcement arm, AI is changing the landscape of federal policing.
Yet, for all the technical gains, these agencies are learning one lesson above all: it is not enough to build AI tools. They must build trust in those tools. That trust is earned not only through performance, but through the ethical, transparent, and responsible governance of every line of code that influences liberty, privacy, and justice.