New York senate committee advances ban on police use of biometric surveillance

The New York Senate Internet and Technology Committee has taken a decisive legislative step to confront the rapidly expanding use of biometric surveillance technologies by law enforcement. Senate Bill S5609, introduced by Sen. Julia Salazar and six co-sponsors, was passed out of the committee by a 5-2 vote. It signals a growing unease among New York lawmakers over the unchecked deployment of facial recognition systems and other biometric surveillance tools across the state.
The bill now moves to the Senate Codes Committee for further consideration. If enacted, this legislation would position New York as a leader in addressing the ethical and legal challenges posed by the use of biometric surveillance technologies in law enforcement. The bill seeks to prohibit police departments and individual officers from acquiring, possessing, or using any biometric surveillance systems.
While the bill imposes a broad ban on biometric surveillance tools, it carves out limited exceptions, such as the continued use of mobile fingerprint scanners and access to the state’s DNA database, indicating a targeted approach that balances enforcement needs with privacy rights.
S5609 defines a biometric surveillance system as any automated or semi-automated process used to identify individuals based on biometric information or that generates surveillance data from such information. This includes the use of facial recognition software, iris scans, gait recognition, and other forms of algorithmic identification tied to unique human features.
The bill also would establish a Biometric Surveillance Regulation Task Force. This twelve-member body, comprising representatives from law enforcement and civil rights organizations as well as experts in data security and privacy, would be tasked with studying the current and proposed uses of biometric surveillance systems.
The task force would evaluate the effectiveness, accuracy, and potential harms of such systems, particularly their impact on vulnerable populations. Based on their findings, the task force would recommend whether law enforcement should be permitted to use these technologies and, if so, propose a comprehensive framework for their regulation.
The push for this legislation comes amid mounting public and expert concern over the misuse and poor regulation of facial recognition technologies across the U.S. Advocates have long argued that these systems, often developed with little transparency and deployed without public consent, pose significant risks to civil liberties, particularly for minorities and marginalized communities that are disproportionately targeted by policing.
Misidentifications caused by flawed or biased algorithms have already resulted in wrongful arrests in multiple jurisdictions. In New York, there have been documented instances of police using facial recognition software to analyze sealed juvenile records or searching for suspects by uploading images of celebrities who vaguely resembled individuals of interest. These practices raise substantial due process concerns and underline the urgent need for oversight.
The bill’s proponents cite these incidents as evidence that the current lack of regulation and oversight of biometric surveillance technologies poses serious risks to civil rights, civil liberties, and due process.
The legislation’s damages provision would allow individuals to seek redress in court if their rights are violated under the law, introducing a rare but necessary accountability mechanism in an area of policing that has often operated in the shadows.
Civil rights groups, including the Surveillance Technology Oversight Project (STOP), have been among the most vocal backers of the bill. Through the “Ban the Scan” campaign, STOP and its coalition partners have called for a full statewide ban on biometric surveillance not only by police, but also in schools, residential buildings, and public accommodations. They argue that facial recognition systems are inherently flawed and have already contributed to discriminatory enforcement, wrongful arrests, and chilling effects on public life.
The fight over biometric surveillance in New York is playing out in tandem with broader legislative moves to rein in algorithmic and AI-based systems. Governor Kathy Hochul recently signed Senate Bill S7543B, the Legislative Oversight of Automated Decision-Making in Government Act, which mandates human oversight of all automated decision-making systems used by state agencies, including those that could impact access to public services, civil liberties, and due process.
Under this law, agencies must conduct and submit rigorous impact assessments of any AI system before deployment and reevaluate them biennially. These assessments must also examine potential discriminatory outcomes and ensure that systems are not used without meaningful human review.
While S7543B focuses on transparency and oversight in government-run AI systems, S5609 takes a bolder stand by outlawing specific surveillance technologies whose risks, critics say, outweigh their utility. Together, these legislative actions signal a new chapter in New York’s governance of surveillance and algorithmic tools, one that focuses on privacy, equity, and public accountability.
As S5609 proceeds to the Senate Codes Committee, its fate will depend on sustained public pressure and political will. If enacted, it would not only curtail the spread of biometric surveillance in one of the country’s most populous states, but also serve as a model for other states weighing how to respond to the proliferation of opaque, high-risk surveillance technologies.