Responsible AI is getting a lot of buzz. With policy conversations turning toward the deregulation of AI, we’ve been led to believe that the responsibility for ethical practices falls on enterprises, as it largely has since the technology’s inception. This, however, is wrong. The days of “AI washing” are coming to an end. And while we may see lags in federal oversight, that’s not the case for state and local governments.
State lawmakers in 45 states introduced nearly 700 AI-related bills in 2024. Of those, 113 were ultimately enacted into law. That is a feather in the cap for truly responsible, ethical AI. But it’s also a real challenge for enterprises. While piecemeal AI governance is better than nothing, it makes for an extremely complex and fragmented legal environment.
States like California, Colorado, Utah, Texas and Tennessee are blazing the trail, enacting comprehensive legislation to govern AI systems. Others, including New York, Illinois and Virginia, are advancing targeted and sector-specific regulations. While smaller states remain lightly regulated, partly because they often wait to adopt legislation modeled on larger states’, enterprises operating digitally or across state lines need to be aware of where they could be in breach of the law.
Emerging regulatory patchwork
California’s Assembly Bill 2013 and Senate Bill 942, set to take effect in 2026, impose sweeping transparency and accountability requirements on businesses deploying AI in the private sector. Colorado’s new AI Act mandates impact assessments and oversight for “high-risk” AI systems. It’s not just blue states cracking down, either.
Utah has taken a distinctive approach with its Artificial Intelligence Policy Act, establishing state-level accountability measures and an oversight office. Tennessee’s ELVIS Act breaks new ground by protecting voice and likeness rights from generative AI misuse. In somewhat of a surprise, Texas has introduced what would be the most expansive state regulation of AI if the current version becomes law.
These laws mark a shift from abstract principles to real, legal mandates. And these are just the examples — many other states are introducing bills or forming task forces to explore stronger AI oversight. This growing body of legislation reflects increasing public concern over privacy, fairness, labor displacement and misinformation, only amplified by generative AI tools.
Regulatory uncertainty is a risk multiplier
The diversity and speed of AI regulation present formidable compliance risks for businesses. A company may deploy an AI chatbot for HR that is compliant in one state but in violation in another. Laws defining “high-risk” AI or requiring disclosures and audit trails vary not just in content and terminology, but also in enforcement mechanisms. This creates a legal blind spot with the potential for litigation, reputational damage or fines.
The lag between innovation and oversight heightens the chances of enterprises being caught off guard when new laws take effect. AI systems already deployed may require retroactive adjustments, audits or removal, particularly if they lack documentation on training data, bias mitigation or explainability. Reliance on third-party vendors and solutions is another liability if those providers aren’t up to speed on evolving standards.
AI governance is not just about public sentiment; it’s about operating legally
According to research from Pew, despite differing sentiments towards AI (experts view it as far more beneficial than the general public does), similar shares of the public and experts want more control and regulation of AI. More than half of US adults (55%) and AI experts (57%) say they want more control over how it’s used, and both groups worry more that government regulation of AI will be too lax than too excessive.
To summarize, most would agree that more control over how AI shows up in our lives or work is necessary. Regulatory readiness signals responsible leadership, builds customer trust and reduces risk exposure. And enterprises that invest now in responsible AI practices — explainability, fairness and human oversight — will not only win public favor, but be better positioned to comply with AI legislation as it develops.
For businesses, this requires legal, compliance, data science, product teams and others to work together to assess AI use cases, map applicable regulations and implement proactive governance measures. Keeping up with legislative developments, especially in high-regulation states, should be part of every digital organization’s risk management framework.
AI beyond borders
The National Institute of Standards and Technology (NIST) AI Risk Management Framework is a jumping-off point for the broader regulatory infrastructure now taking shape. However, until federal standards emerge, the burden remains on enterprises to navigate the state-by-state maze.
For businesses operating beyond state lines, it would be smart to adopt the strictest applicable standard as a baseline. For instance, aligning with California or Colorado’s forthcoming requirements could future-proof AI deployments against stricter laws enacted elsewhere later.
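One lightweight way to operationalize that idea is to track each state’s obligations as structured data and derive the enterprise baseline as the union of the strictest requirements. The sketch below is purely illustrative; the states, obligation names and field names are placeholders, not summaries of the actual statutes.

```python
# Illustrative sketch: derive an enterprise-wide AI compliance baseline by
# taking the union of obligations across the state regimes you operate in.
# The obligation lists are hypothetical placeholders, not legal summaries.

STATE_OBLIGATIONS = {
    "CA": {"training_data_disclosure", "ai_content_labeling"},
    "CO": {"impact_assessment", "high_risk_oversight", "consumer_notice"},
    "UT": {"chatbot_disclosure"},
}

def enterprise_baseline(obligations: dict[str, set[str]]) -> set[str]:
    """Union of all state obligations: meet the strictest requirement everywhere."""
    baseline: set[str] = set()
    for requirements in obligations.values():
        baseline |= requirements
    return baseline

if __name__ == "__main__":
    for requirement in sorted(enterprise_baseline(STATE_OBLIGATIONS)):
        print(requirement)
```

The design choice here is deliberately conservative: rather than maintaining a separate policy per state, a single superset policy is applied everywhere, trading some over-compliance for simpler governance.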
Global developments — like the EU AI Act and national AI laws passed in China, Canada, South Korea and Brazil — are raising the compliance bar for international companies. In other words, those doing business globally are going through the same growing pains they experienced with GDPR back in 2018. But it’s a much-needed catalyst for building privacy, safety and transparency into AI development from the outset.
Getting a handle on it
Keeping up with changing legislation is a job — literally. Companies are increasingly hiring chief AI officers and AI governance executives and teams to manage new developments. It’s not just a nice thing to do or a way to build trust or a competitive differentiator. It’s the law, and it impacts all companies using AI. Fortunately for those who can’t hire an internal gatekeeper, the best defense is using the very AI in question to help.
New tools and entire companies have emerged for the sole purpose of helping organizations stay up to date with new AI-related legislation and regulations. The technology can create frameworks that automatically deploy test cases against the AI itself and that are updated as new regulations emerge. These can be used to test models before release and for regular monitoring in production, providing ongoing evidence that the AI is operating fairly.
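As a rough illustration of what such a recurring check might look like (not any specific vendor’s product), the sketch below computes a demographic parity gap on a batch of recent model decisions and flags the model for review if the gap exceeds a policy threshold. The column names, threshold and sample data are hypothetical assumptions.

```python
# Hypothetical recurring fairness check: compare positive-outcome rates across
# a protected attribute and flag the model if the gap exceeds a policy threshold.
# Column names, threshold and data source are illustrative assumptions.

import pandas as pd

PARITY_THRESHOLD = 0.10  # maximum acceptable gap in positive-outcome rates

def demographic_parity_gap(df: pd.DataFrame, group_col: str, outcome_col: str) -> float:
    """Spread between the highest and lowest positive-outcome rates across groups."""
    rates = df.groupby(group_col)[outcome_col].mean()
    return float(rates.max() - rates.min())

def run_fairness_check(predictions: pd.DataFrame) -> dict:
    gap = demographic_parity_gap(predictions, group_col="applicant_group", outcome_col="approved")
    return {
        "parity_gap": round(gap, 3),
        "within_policy": gap <= PARITY_THRESHOLD,
    }

if __name__ == "__main__":
    # Stand-in for a batch of recent production decisions pulled from a log store.
    sample = pd.DataFrame({
        "applicant_group": ["A", "A", "A", "B", "B", "B"],
        "approved":        [1,   1,   0,   1,   0,   0],
    })
    print(run_fairness_check(sample))  # e.g. {'parity_gap': 0.333, 'within_policy': False}
```

Run on a schedule against production logs, checks like this create the audit trail of ongoing evidence that many of the new state laws expect for high-risk systems.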
Advanced tools can also help establish governance policies and processes and simplify staying compliant with federal, state and local regulatory standards. Adopting responsible AI practices from the get-go is another way to stay ahead, but enlisting AI systems to help can ensure nothing is overlooked from a compliance perspective.
AI can’t thrive in a regulatory vacuum. Newly passed and proposed legislation reflects a greater understanding of AI’s societal risk factors and a need to mitigate them. For enterprises, the cost of ignoring these developments is high. Staying ahead of the regulatory curve isn’t just about building resilient, trusted and future-proof AI systems — it’s about operating legally and being allowed to operate such systems at all.
This article is published as part of the Foundry Expert Contributor Network.