
Texas charts independent path on AI regulation; awaits governor’s okay

The Texas Responsible Artificial Intelligence Governance Act (TRAIGA), passed by the Texas legislature and awaiting Governor Greg Abbott’s signature, would regulate the development and use of AI in both the public and private sectors. If signed by Abbott, it is expected to take effect on January 1, 2026.

TRAIGA would mark Texas’s most comprehensive attempt yet to establish oversight over AI technologies, building on national debates about the role of machine learning in everyday life. The bill’s passage comes amid a wave of state-led efforts, such as those in Colorado, Utah, and California, to impose limits on certain uses of AI, even as federal lawmakers consider preemptive action that could strip states of that authority.

However, passage of the bill by the solidly Republican Texas legislature bucks the positioning of the Trump administration and the Republican-controlled Congress as the sole arbiter of everything AI, an indication that there is a lot more daylight between federal and state ideology on this matter, possibly enough to cause U.S. senators to nix their House colleagues’ legislation that would prohibit states from regulating AI themselves.

Originally proposed as a sweeping risk-based framework modeled in part on the European Union’s AI Act, the final version of TRAIGA reflects months of political negotiation and industry lobbying that narrowed its scope considerably. The version now awaiting Governor Abbott’s decision no longer presents a tiered model of AI system risks or obligations. Instead, it focuses on prohibiting certain explicitly harmful uses of AI, reinforcing current civil rights protections under federal and state law, and establishing safeguards against biometric misuse and behavior manipulation.

While limited in scope compared to earlier drafts, TRAIGA still introduces notable obligations for developers, deployers, and government users of AI technologies in Texas.

“I don’t think yet we really need to worry about a Terminator scenario of killer robots,” Kevin Welch, president of EFF-Austin, a consumer advocacy group focused on protecting digital rights, told The Texas Tribune. “I would say it’s important to focus on real harms, which is one thing I do really like about this bill. It focuses on real harms and not hypothetical sci-fi scenarios.”

David Dunmoyer, campaign director for the Texas Public Policy Foundation’s Better Tech for Tomorrow program, added that the bill is about “getting … the right guardrails and the right regulatory system in place that ensures we’re not just preserving humanity, but advancing it and furthering it.”

Hodan Omaar, a senior policy manager at the Center for Data Innovation, said in January that TRAIGA’s “heavy-handed approach risks creating more problems than it solves, prioritizing bureaucratic hurdles over meaningful progress in fairness and accountability.”

Texas state Rep. Giovanni Capriglione said TRAIGA “represents a pivot toward harmonizing AI policy within existing privacy and consumer protection frameworks.”

If Abbott signs the bill into law or allows the constitutionally allotted 20-day period to expire without a veto, TRAIGA will take effect on January 1, 2026. From that date forward, the law would apply to any individual or entity that develops, operates, or deploys AI systems within Texas, or that offers AI-powered products or services to Texas residents. This would encompass AI developers headquartered in other states or countries whose products are available in the Texas market, customer-facing businesses employing AI tools in their operations, and contractors providing AI services to Texas government agencies.

However, the law’s future may not be entirely within Texas’s control. As of mid-June, language in a pending federal budget reconciliation bill includes a proposed 10-year moratorium on new state AI laws. Should that federal measure pass with its current language intact, laws like TRAIGA would be blocked from going into effect, creating a conflict between state sovereignty and national regulatory uniformity. Until that question is resolved in Washington, TRAIGA’s viability rests on Governor Abbott’s signature and the outcome of the federal legislative process.

As it stands, TRAIGA includes a core set of provisions intended to curb what lawmakers characterized as the most egregious and immediate threats posed by unregulated AI. The bill specifically prohibits the development or deployment of AI systems that intentionally discriminate against individuals based on protected characteristics under state and federal law.

In a clarifying statement that aligns with the Trump administration’s stance, TRAIGA makes clear that a claim of discrimination must be based on intent rather than disparate impact alone, reaffirming legal standards already in place for human-led decision-making.

The legislation also targets a broader category of AI misuse involving behavioral manipulation. Under TRAIGA, it would be unlawful to use AI systems to impair an individual’s constitutional rights, encourage self-harm, promote violence, or facilitate criminal behavior.

AI-generated sexually explicit content, particularly that involving chatbots, also falls under the bill’s scope of prohibited development and deployment. These provisions appear tailored to respond to recent concerns about the rise of deepfakes, AI-driven grooming and exploitation, and other psychological or reputational harms that lack clear legal boundaries.

TRAIGA’s scope is especially pointed when it comes to the government’s use of AI. The bill bans Texas state and local agencies from using AI systems to implement “social scoring” – the ranking of individuals based on behavioral or demographic characteristics in ways that result in the denial of opportunities or access to services. This provision echoes criticism of China’s social credit system and is likely aimed at preempting any similar practices from emerging in public administration in the U.S.

One of TRAIGA’s most significant and concrete additions is its update to Texas’s Biometric Identifier Act. The bill states explicitly that biometric data, such as fingerprints, voiceprints, retina or iris scans, and facial geometry, cannot be harvested from publicly available online media unless that media was made public by the subject themselves.

In essence, the law seeks to close a loophole often exploited by companies scraping publicly available content to train facial recognition algorithms or build surveillance tools. While this provision introduces a meaningful constraint on biometric data collection, it leaves the term “publicly available” undefined, opening the door to future legal challenges and enforcement ambiguities.

Some exceptions are built into the law’s biometric restrictions. Financial institutions using voiceprints, companies that use biometric data solely to train AI systems without deploying those systems for identification, and AI systems employed in security or fraud prevention contexts are not bound by the consent requirements.

Even so, government agencies remain under stricter limitations. They would be prohibited from developing or using AI to identify individuals using biometric data collected without consent if such collection would violate constitutional or statutory rights.

To accommodate the fast-moving nature of AI development, TRAIGA includes an “AI Sandbox” provision under which businesses may test AI systems in a regulatory safe harbor for up to 36 months. During this time, companies must provide quarterly updates on performance, risk mitigation, and user feedback to regulators.

This design reflects the dual goal of encouraging innovation while protecting the public from emerging harms that often become visible only after deployment. By creating this controlled environment, Texas lawmakers appear to be balancing the interests of the state’s robust tech sector with growing demands for public accountability.

Enforcement of TRAIGA would fall solely to the Texas Attorney General, who is empowered to bring civil enforcement actions against violators regardless of where the AI system was developed or where its maker is headquartered. Individuals affected by AI misuse would not be granted a private right of action under TRAIGA.

Instead, the public could submit complaints to the Attorney General’s office, which would investigate and take appropriate action if warranted. The penalties authorized by the bill include civil fines of up to $200,000 for violations that cannot be cured, as well as daily penalties of up to $40,000 for continued non-compliance.

However, the law includes a safe harbor for companies that adopt compliance frameworks aligned with guidelines from the federal National Institute of Standards and Technology (NIST), and it offers reduced penalties for violations discovered through red-team testing or self-audits.

Oversight responsibilities would also be handed to the newly established Texas Artificial Intelligence Council, which operates under the Department of Information Resources. The Council is charged with ensuring that AI systems used by Texas agencies are ethical, lawful, and in the public’s interest.

Though its powers are advisory rather than binding, the Council may issue reports to the legislature recommending future reforms and will play a key role in educating state and local government staff on appropriate uses of AI. It is unclear, however, how influential the Council’s guidance will be in practice, particularly given its limited enforcement capabilities.

In its final form, TRAIGA is neither a sweeping AI governance framework nor a laissez-faire endorsement of unregulated development. Instead, it represents a carefully negotiated attempt to address immediate risks while leaving room for growth and future policy development.

The bill’s most stringent constraints apply to government agencies, while private companies are allowed broader latitude, provided they stay within clearly marked boundaries related to civil liberties, biometric privacy, and behavioral manipulation.
