Technology moves quickly and regulations always lag behind. But AI has thrown everything into overdrive, both the pace of technological change and the rate at which regulators are having to come up with new laws.
China’s AI laws were first enacted in 2023, just months after the release of ChatGPT. The EU’s AI Act was approved in 2024. And other countries also have AI laws already on the books, as do many US states. In fact, Taft Law has a list of 50 different laws in 19 states that are either already in effect or will take effect soon. And according to the International Association of Privacy Professionals (IAPP), more AI bills are in the works in states including Washington, Arizona, New Mexico, Nebraska, and Massachusetts.
“States are acting at a pace that we’ve never seen,” says Cobun Zweifel-Keegan, managing director of IAPP’s Washington, DC, office. “It’s much quicker than we’ve seen with privacy.”
December’s executive order instructing the US DOJ to block states from enforcing their own AI laws is an attempt to slow this down, he says, but states are pushing back and racing to pass legislation now.
Meanwhile, federal legislators are also promoting bills to regulate AI, such as measures that would add guardrails around high-risk systems. But legislative action, whether a federal regulatory framework or a moratorium, requires cooperation. “And right now we don’t have that,” he adds. “So the landscape will definitely get more complicated before it simplifies.”
Enterprises deploying AI also have plenty of other laws to comply with, including data security and privacy laws, rules on automated decision-making, and copyright, all of which apply to AI systems just as they do to earlier types of technology.
“Consumer protection laws are quite flexible, regardless of the technology in place,” he continues. “If you’re deceiving customers, it can raise legal scrutiny.”
For enterprises looking to get ahead of regulations, so they don’t end up pouring money and effort into systems they’ll have to rip out when new laws take effect, one approach is to look at laws in countries that are further along in the regulatory process, such as the EU.
But Zweifel-Keegan recommends a different approach to AI compliance: start with a set of best practices, then adapt them to specific laws as needed.
He recommends companies look at the NIST AI Risk Management Framework, ISO 42001, and the OECD AI Principles as good places to start.
“All of these were instrumental when we were building out our IAPP AI Governance certification,” he says. After all, it’s much easier to build to a framework rather than a law, adds Doron Goldstein, partner and US head of the data innovation, privacy, and cybersecurity practice at global law firm Withers.
“Building to a framework is much more familiar for operational teams,” he says. “Then the legal team can help you do the crosswalk to what the requirements are.”
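To make the idea concrete, a crosswalk can start out as little more than a structured mapping from internal controls to the framework clauses they implement and the laws they help satisfy. The sketch below is a hypothetical illustration, not legal guidance; the control names and mappings are invented for the example.

```python
# Hypothetical crosswalk: internal control -> framework clause -> laws it helps satisfy.
# Control names and mappings are invented for illustration, not legal guidance.
CONTROL_CROSSWALK = {
    "model-inventory": {
        "framework": "NIST AI RMF (MAP function)",
        "laws": ["EU AI Act high-risk registration", "Colorado SB 24-205"],
    },
    "human-review-of-decisions": {
        "framework": "NIST AI RMF (MANAGE function)",
        "laws": ["EU AI Act human oversight", "Illinois HB 3773"],
    },
    "ai-disclosure-to-users": {
        "framework": "ISO 42001 transparency controls",
        "laws": ["Utah AI Policy Act", "California SB 942"],
    },
}

def laws_covered_by(control: str) -> list[str]:
    """Which legal requirements does a given internal control help satisfy?"""
    return CONTROL_CROSSWALK.get(control, {}).get("laws", [])

print(laws_covered_by("ai-disclosure-to-users"))
# -> ['Utah AI Policy Act', 'California SB 942']
```

Operational teams maintain the controls; as new laws pass, the legal team only has to update the mapping, not the controls themselves.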
Even when the laws seem robust and comprehensive, like the EU AI Act, it can be a mistake to invest immediately in getting to total compliance.
“Nobody’s actually doing that,” says Saskia Vermeer-de Jongh, partner and AI and digital law leader at EY. “It costs a lot of money and the legislation framework isn’t stable and finalized yet.”
For example, she says, the EU is currently working on the Digital Omnibus legislation which, if passed, would amend the AI Act to bring it into harmony with other EU regulations, such as GDPR, and reduce some administrative overhead for companies. It could also delay the date when some provisions go into effect by one or two years.
But she also advises against ignoring safety and compliance altogether. She recommends companies build their own control frameworks based on something like NIST, and invest in AI literacy and training for their employees.
The next front in legislation
Current AI regulations focus mostly on chatbots, privacy, or the accuracy and fairness of individual decisions made by AI systems. In other words, they address the previous generation of gen AI.
Today, AI is all about agentic systems: interconnected swarms of AI-powered agents, each one capable of carrying out tasks, accessing data, and interacting with other agents. That not only makes the AI harder to monitor, it also makes it difficult to shut down, says Troy Leach, chief strategy officer at the Cloud Security Alliance.
“Every technologist I’ve talked to says it’s nearly impossible,” he says. That’s a tough situation to be in, because some AI frameworks and regulations want to see a kill switch — the ability to turn off AI functionality and revert to the previous system.
In practice, he says, the best a company might be able to do is turn off individual AI-powered functions rather than the entire AI system. And that’s a big risk. “We’re headed toward levels of catastrophe,” he says.
So the world will need new legislation to deal with how agents behave. “I think there’ll be new laws on the books,” he adds. “Probably not this year, because it takes legislators time to understand the technology and be motivated to create the rules. But by 2027, I think there’ll be laws to help curb and insulate the risks.”
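What turning off individual AI-powered functions looks like in practice is often just a feature-flag layer gating each AI capability, with a non-AI fallback behind it. Below is a minimal, hypothetical sketch of that pattern; the feature names and fallback behavior are invented for illustration.

```python
import threading

class AIKillSwitch:
    """Registry of flags gating individual AI-powered features.

    Flipping a flag routes calls to a non-AI fallback instead of the model,
    so one risky capability can be disabled without taking down the system.
    """

    def __init__(self, features):
        self._lock = threading.Lock()
        self._enabled = {name: True for name in features}

    def disable(self, feature):
        with self._lock:
            self._enabled[feature] = False

    def guard(self, feature, ai_call, fallback):
        """Run ai_call() if the feature is still enabled, else fall back."""
        with self._lock:
            enabled = self._enabled.get(feature, False)
        return ai_call() if enabled else fallback()


# Hypothetical usage: disable AI-drafted replies, keep ticket summaries running.
switch = AIKillSwitch(["draft_replies", "summarize_tickets"])
switch.disable("draft_replies")

reply = switch.guard(
    "draft_replies",
    ai_call=lambda: "...text from an LLM...",    # normal AI path
    fallback=lambda: "[route to human agent]",   # pre-AI behavior
)
print(reply)  # -> "[route to human agent]"
```

The catch Leach describes is that this only reverts one function at a time; it doesn’t unwind what interconnected agents may already have done.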
Preparing for the pace of change
CIOs don’t just need to deal with the rapid evolution of AI-related regulations and the rapid pace of change of AI technologies. There are also the constant changes in the AI models themselves. And even the same model, unchanged from yesterday to today, can give different answers to the exact same questions.
“With traditional software, you test it like crazy, deploy it, and you’re done,” says Seth Johnson, CTO at Cyara, a customer experience company. That’s not the case with gen AI. “You can’t just assume that if it works right today it’s going to work right tomorrow.”
That makes AI different from previous regulatory challenges, says Gartner analyst Lauren Kornutick. In the past, a company might define its compliance requirements, put controls in place, and have someone come in periodically to run tests and audits. “That doesn’t work with AI,” she says. “You have to have real-time, not periodic, monitoring around decisions in case you get alerted to an auditing anomaly.”
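As a rough illustration of real-time rather than periodic oversight, the hypothetical sketch below logs every AI decision as it happens and raises an alert when the rate of low-confidence decisions in a rolling window crosses a threshold. The threshold, window size, and alert hook are all assumptions, not drawn from any named framework.

```python
from collections import deque
from datetime import datetime, timezone

class DecisionMonitor:
    """Logs every AI decision as it happens and alerts in real time,
    instead of waiting for a periodic audit to surface problems."""

    def __init__(self, alert, window=100, max_low_conf_rate=0.2):
        self.alert = alert                  # callback: page on-call, notify compliance
        self.flags = deque(maxlen=window)   # rolling window of low-confidence flags
        self.max_low_conf_rate = max_low_conf_rate
        self.audit_log = []                 # append-only trail for later audits

    def record(self, decision, confidence):
        self.audit_log.append({
            "time": datetime.now(timezone.utc).isoformat(),
            "decision": decision,
            "confidence": confidence,
        })
        self.flags.append(confidence < 0.5)  # flag low-confidence decisions
        rate = sum(self.flags) / len(self.flags)
        if rate > self.max_low_conf_rate:
            self.alert(f"low-confidence decision rate at {rate:.0%}")


# Hypothetical usage with a trivial alert hook.
monitor = DecisionMonitor(alert=print, window=5)
for conf in (0.9, 0.4, 0.3, 0.8, 0.2):
    monitor.record(decision="approve", confidence=conf)
```

The point of the pattern is that the audit trail and the alerting happen on the same event stream, so anomalies surface as decisions are made rather than at the next scheduled review.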
Complying with AI regulations might seem like a daunting task, given all these challenges, but Kornutick says it’s not hopeless. “Don’t feel overwhelmed,” she says. “It might be hard to unpack, but organizations that have spent time on their cybersecurity, IT, and privacy control environments are already in a good place. You’re probably further along than you think you are.”
Major AI governance frameworks by region
Asia-Pacific
While most people think of the EU AI Act as the first major AI regulation, China actually beat it to the punch. Its Interim Measures for the Administration of Generative AI Services went into effect in August 2023. The law required security assessments, labeling, and content moderation, and prohibited collecting and retaining unnecessary personally identifiable information.
In 2025, more AI-related regulations went into effect, covering the security of AI models and data, the protection of minors’ personal information, and the registration of AI algorithms. And in January 2026, major amendments to China’s cybersecurity law took effect, covering AI risk assessment, security governance, and AI ethics.
China has also released draft rules on agentic AI, according to the IAPP, with standards for AI agents, as well as other AI-related technologies, expected to be issued this year. And for foreign companies offering AI services in China, there are strict data localization and content moderation requirements.
In South Korea, the AI Basic Act took effect in January this year, addressing issues of transparency and high-risk AI systems. It applies not only to local companies, but also to large foreign ones with more than one million daily users in South Korea.
In Singapore, the Model AI Governance Framework for Agentic AI was launched in January as well. It’s the first global framework specifically for agentic AI, and while it doesn’t impose binding legal obligations, it does offer an indication of where regulations could be headed.
Japan’s AI Act, which went into effect in September, imposes no penalties. Instead, the law establishes a government expert body to help determine whether AI systems are safe and trustworthy, and to provide evaluations, guidance, and best practices.
The Australian government released the voluntary Guidance for AI Adoption framework in October, designed to help companies mitigate the risks of AI. And most recently, India released its AI Governance Guidelines at the AI Impact Summit in February. This is a voluntary, risk-based framework designed to balance innovation with safeguards, and to pave the way for future regulations.
Europe
The EU officially adopted the landmark AI Act in May of 2024, with the first provisions going into effect that summer, and additional ones in 2025. But the most important provisions are slated for August this year.
The EU’s comprehensive approach has made it a global leader. In a Comparitech report analyzing 178 countries, the three with the strongest AI regulations globally are Denmark, France, and Greece, which layer additional protections on the AI Act. Of the 33 countries with comprehensive AI legislation, 27 are EU members, and only EU countries provide direct workplace protections such as banning emotion-recognition AI in employee monitoring.
Like GDPR, the AI Act applies to global companies, and penalties are as high as 7% of global revenues. Some applications of AI are banned outright, such as the harmful manipulation of human behavior, social scoring, and some types of facial recognition. High-risk applications of AI, such as those that affect health or employment, fall under stringent requirements. For example, companies must establish risk management systems, data governance, auditing, human oversight, and ongoing quality and security management.
There are also several other EU laws that touch on AI systems, such as GDPR. The European Parliament is currently considering new legislation, the Digital Omnibus, that would consolidate the various regulatory frameworks into a single set of policies. If approved, the new law would reduce compliance requirements and administrative burdens for businesses. More importantly, it would postpone some of the AI Act requirements that were scheduled to go into effect in August until 2027 or 2028.
United States
As with cybersecurity, US AI regulation is currently a patchwork of state-level legislation, with no common federal law in sight. In 2024, Utah became the first state to regulate gen AI specifically, with the Utah Artificial Intelligence Policy Act. Amended in 2025, it requires disclosure if a customer interacts with an AI instead of a human.
Now, Utah legislators are debating the AI Transparency Act, which requires companies to create and publish public safety and child protection plans, conduct risk assessments, and report safety incidents. According to news reports, the White House is pressuring legislators not to approve this bill.
In Texas, the Responsible Artificial Intelligence Governance Act (TRAIGA) went into effect in January, and restricts harmful AI such as systems designed to manipulate behavior or exploit children.
In Illinois, HB 3773 went into effect in January, and requires employers to notify people affected by AI used in hiring or promotion decisions. The law also prohibits the use of AI systems that have a discriminatory effect.
In California, there are several laws on the books governing AI. AB 2013, the AI Training Data Transparency law, which went into effect in January, requires developers to disclose information about the data used to train their models. SB 53, the Transparency in Frontier Artificial Intelligence Act (TFAIA), which also went into effect in January, requires AI companies to disclose safety frameworks, report incidents, and protect whistleblowers. Finally, SB 942, the California AI Transparency Act, which goes into effect in August, requires watermarking of AI-generated content.
In Colorado, SB 24-205 takes effect in June, covering high-risk AI systems used in employment, healthcare, and financial services.
Passed in 2024, it was the first comprehensive AI law in the country, taking a risk-based approach to AI accountability. Colorado was specifically singled out in the December executive order, which instructed the US Attorney General to start challenging state AI laws.
In addition, New York state’s Responsible AI Safety and Education Act (RAISE) was signed into law in December and becomes effective by January 2027. RAISE requires transparency around training data, safety plans, and safety incidents.