Tiatra, LLC
Information Technology Solutions for Washington, DC Government Agencies
Top global and US AI regulations to look out for

Technology moves quickly and regulations always lag behind. But AI has thrown everything into overdrive, both the pace of technological change and the rate at which regulators are having to come up with new laws.

China’s AI laws were first enacted in 2023, just months after the release of ChatGPT. The EU’s AI Act was approved in 2024. And other countries also have AI laws already on the books, as do many US states. In fact, Taft Law has a list of 50 different laws in 19 states that are either already in effect or will take effect soon. And according to the International Association of Privacy Professionals (IAPP), more AI laws are in the works in many states, including Washington, Arizona, New Mexico, Nebraska, and Massachusetts.

“States are acting at a pace that we’ve never seen,” says Cobun Zweifel-Keegan, MD of IAPP’s Washington, DC, office. “It’s much quicker than we’ve seen with privacy.”

December’s executive order instructing the US DOJ to block states from enforcing their own AI laws is an attempt to slow this down, he says, but states are pushing back with a focus on trying to pass things now.

Meanwhile, there are also federal legislators promoting bills to regulate AI, such as for additional guardrails around high-risk systems. But legislative action, whether a federal regulatory framework or a moratorium, requires cooperation. “And right now we don’t have that,” he adds. “So the landscape will definitely get more complicated before it simplifies.”

Enterprises deploying AI also have many other laws to comply with, such as data security and privacy laws, laws about automated decision-making, and copyright laws, all of which apply to AI systems just as they do to earlier types of technology.

“Consumer protection laws are quite flexible, regardless of the technology in place,” he continues. “If you’re deceiving customers, it can raise legal scrutiny.”

For enterprises that want to get ahead of regulation, and avoid investing heavily in systems they’ll have to rip out when new laws take effect, one approach is to look at jurisdictions that are further along in the regulatory process, such as the EU.

But Zweifel-Keegan recommends a different philosophy of AI compliance: start with a set of best practices, then adapt them to specific laws as needed.

He recommends companies look at the NIST AI Risk Management Framework, ISO 42001, and the OECD AI Principles as good places to start.

“All of these were instrumental when we were building out our IAPP AI Governance certification,” he says. After all, it’s much easier to build to a framework rather than a law, adds Doron Goldstein, partner and US head of the data innovation, privacy, and cybersecurity practice at global law firm Withers.

“Building to a framework is much more familiar for operational teams,” he says. “Then the legal team can help you do the crosswalk to what the requirements are.”
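The crosswalk idea above can be made concrete as data: operational teams build to one internal control set, and each control is mapped to the framework clauses and laws it satisfies, so legal can spot gaps per jurisdiction. A minimal, hypothetical sketch follows; every control name and clause label here is illustrative, not drawn from any actual framework or statute text.

```python
# Hypothetical crosswalk: internal controls mapped to frameworks and laws.
# All control names and clause labels are illustrative assumptions.
CROSSWALK = {
    "log-all-model-decisions": {
        "frameworks": ["NIST AI RMF: MEASURE"],
        "laws": ["EU AI Act (high-risk record-keeping)"],
    },
    "human-review-of-hiring-output": {
        "frameworks": ["NIST AI RMF: MANAGE"],
        "laws": ["Illinois HB 3773", "EU AI Act (human oversight)"],
    },
    "disclose-chatbot-to-users": {
        "frameworks": ["OECD AI Principles: transparency"],
        "laws": ["Utah AI Policy Act"],
    },
}

def controls_for_law(law_substring: str) -> list[str]:
    """Return the internal controls whose mapped laws mention the given text."""
    return sorted(
        control
        for control, mapping in CROSSWALK.items()
        if any(law_substring in law for law in mapping["laws"])
    )

print(controls_for_law("EU AI Act"))
```

The point of the structure is that when a new law arrives, only the mapping changes; the operational controls stay stable.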

Even when the laws seem robust and comprehensive, like the EU AI Act, it can be a mistake to invest immediately in getting to total compliance.

“Nobody’s actually doing that,” says Saskia Vermeer-de Jongh, partner and AI and digital law leader at EY. “It costs a lot of money and the legislative framework isn’t stable or finalized yet.”

For example, she says, the EU is currently working on the Digital Omnibus legislation which, if passed, will make changes to the AI Act in order to bring it into harmony with other EU regulations, such as GDPR, and reduce some administrative overhead for companies. Plus, it could delay the date when some provisions go into effect by one or two years.

But she also advises against ignoring safety and compliance altogether, and recommends companies build their own control frameworks based on something like NIST, as well as invest in AI literacy and training for their employees.

The next front in legislation

Current AI regulations focus mostly on chatbots, privacy, or the accuracy and fairness of individual decisions made by AI systems. In other words, the previous generation of gen AI.

Today, AI is all about agentic systems: interconnected swarms of AI-powered agents, each capable of carrying out tasks, accessing data, and interacting with other agents. Not only does this make the AI harder to monitor, it also makes it difficult to shut down, says Troy Leach, chief strategy officer at the Cloud Security Alliance.

“Every technologist I’ve talked to says it’s nearly impossible,” he says. That’s a tough situation to be in, because some AI frameworks and regulations want to see a kill switch — the ability to turn off AI functionality and revert to the previous system.

In practice, he says, the best a company might be able to do is turn off individual AI-powered functions rather than the entire AI system. And that’s a big risk. “We’re headed toward levels of catastrophe,” he says.

So the world will need new legislation to deal with how agents behave. “I think there’ll be new laws on the books,” he adds. “Probably not this year, because it takes legislators time to understand the technology and be motivated to create the rules. But by 2027, I think there’ll be laws to help curb and insulate the risks.”

Preparing for the pace of change

CIOs don’t just need to deal with the rapid evolution of AI-related regulations and the rapid pace of change in AI technologies. There are also constant changes in the AI models themselves. And even the same model, unchanged from yesterday to today, can give different answers to the exact same questions.

“With traditional software, you test it like crazy, deploy it, and you’re done,” says Seth Johnson, CTO at Cyara, a customer experience company. That’s not the case with gen AI. “You can’t just assume that if it works right today it’s going to work right tomorrow.”

That makes AI different from previous regulatory challenges, says Gartner analyst Lauren Kornutick. In the past, a company might have its compliance requirements, controls in place, and have someone come in and periodically do tests and audits. “That doesn’t work with AI,” she says. “You have to have real-time, not periodic, monitoring around decisions in case you get alerted to an auditing anomaly.”
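The real-time-versus-periodic distinction can be sketched in a few lines: instead of auditing a batch of past decisions on a schedule, every decision is checked as it is logged, and an alert fires the moment a rolling metric drifts out of an expected band. The thresholds, field names, and the approval-rate metric below are all illustrative assumptions, not taken from any standard or the tools mentioned in the article.

```python
# Hypothetical real-time decision monitor: each AI decision is recorded and
# checked immediately; an alert fires as soon as the rolling approval rate
# leaves the expected band, rather than waiting for a periodic audit.
from collections import deque

class DecisionMonitor:
    def __init__(self, window: int = 100, min_rate: float = 0.3, max_rate: float = 0.9):
        self.recent = deque(maxlen=window)   # rolling window of recent outcomes
        self.min_rate = min_rate             # assumed lower bound on approval rate
        self.max_rate = max_rate             # assumed upper bound on approval rate
        self.alerts: list[str] = []

    def record(self, decision_id: str, approved: bool) -> None:
        """Log one decision and immediately check the rolling approval rate."""
        self.recent.append(approved)
        rate = sum(self.recent) / len(self.recent)
        if len(self.recent) >= 10 and not (self.min_rate <= rate <= self.max_rate):
            self.alerts.append(f"{decision_id}: approval rate {rate:.2f} out of band")

monitor = DecisionMonitor(window=50)
# Simulate a stream: balanced traffic, then a sudden shift toward denials.
for i in range(40):
    monitor.record(f"d{i}", approved=(i % 2 == 0))   # ~50% approvals: in band
for i in range(40, 80):
    monitor.record(f"d{i}", approved=False)          # drift: rate falls below 0.3

print(len(monitor.alerts) > 0)  # alerts fired as the drift crossed the band
```

In a real deployment the metric would likely be richer (confidence scores, demographic slices, error rates), but the structural point is the same: the check runs inside the decision path, not on an audit calendar.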

Complying with AI regulations might seem like a daunting task, given all these challenges, but Kornutick says it’s not hopeless. “Don’t feel overwhelmed,” she says. “It might be hard to unpack, but organizations that have spent time on their cybersecurity, IT, and privacy control environments are already in a good place. You’re probably further along than you think you are.”

Major AI governance frameworks by region

Asia-Pacific

While most people think of the EU AI Act as the first major AI regulation, China actually beat it to the punch. Its Interim Measures for Administration of Generative AI Services went into effect in August 2023. The law required security assessments, labeling, and content moderation, and prohibited collecting and retaining unnecessary personally identifiable information.

In 2025, more AI-related regulations went into effect, covering the security of AI models and data, the protection of minors’ personal information, and the registration of AI algorithms. And in January 2026, major amendments to China’s cybersecurity law were enacted, covering AI risk assessment, security governance, and AI ethics.

China has also released draft rules on agentic AI, according to the IAPP, with standards for AI agents, as well as other AI-related technologies, expected to be issued this year. And for foreign companies offering AI services in China, there are strict data localization and content moderation requirements.

In South Korea, the AI Basic Act took effect in January this year, addressing issues of transparency and high-risk AI systems. It applies not only to local companies, but also to large foreign ones with more than one million daily users in South Korea.

In Singapore, the Model AI Governance Framework for Agentic AI was launched in January as well. It’s the first global framework specifically for agentic AI, and while it doesn’t impose binding legal obligations, it does offer an indication of where regulations could be headed.

Japan’s AI Act, which went into effect in September, carries no penalties. Instead, the law establishes a government expert body to help determine whether AI systems are safe and trustworthy, and to provide evaluations, guidance, and best practices.

The Australian government released the voluntary Guidance for AI Adoption framework in October, designed to help companies mitigate the risks of AI. And most recently, India released its AI Governance Guidelines at the AI Impact Summit in February. This is a voluntary, risk-based framework designed to balance innovation with safeguards, and to pave the way for future regulations.

Europe

The EU officially adopted the landmark AI Act in May of 2024, with the first provisions going into effect that summer, and additional ones in 2025. But the most important provisions are slated for August this year.

The EU’s comprehensive approach has made it a global leader. In a Comparitech report analyzing 178 countries, the three with the strongest AI regulations globally are Denmark, France, and Greece, which layer additional protections on the AI Act. Of the 33 countries with comprehensive AI legislation, 27 are EU members, and only EU countries provide direct workplace protections such as banning emotion-recognition AI in employee monitoring.

Like GDPR, the AI Act applies to global companies, and penalties are as high as 7% of global revenues. Some applications of AI are banned outright, such as the harmful manipulation of human behavior, social scoring, and some types of facial recognition. High-risk applications of AI, such as those that affect health or employment, fall under stringent requirements. For example, companies must establish risk management systems, data governance, auditing, human oversight, and ongoing quality and security management.

There are also several other EU laws that touch on AI systems, such as GDPR. The European Parliament is currently considering new legislation, the Digital Omnibus, that will consolidate the various regulatory frameworks into a single set of policies. If approved, the new law will reduce compliance requirements and administrative burdens for businesses. More importantly, it’ll postpone some of the AI Act requirements that were scheduled to go into effect in August until 2027 or 2028.

United States

As with cybersecurity, US AI laws are currently a state-level patchwork of legislation, with no common federal law in sight. In 2024, Utah became the first state to regulate gen AI specifically, with the Utah Artificial Intelligence Policy Act. Amended in 2025, it requires disclosure if a customer interacts with an AI instead of a human.

Now, Utah legislators are debating the AI Transparency Act, which requires companies to create and publish public safety and child protection plans, conduct risk assessments, and report safety incidents. According to news reports, the White House is pressuring legislators not to approve this bill.

In Texas, the Responsible Artificial Intelligence Governance Act (TRAIGA) went into effect in January, and restricts harmful AI such as systems designed to manipulate behavior or exploit children.

In Illinois, HB 3773 went into effect in January, and requires employers to notify people affected by AI used in hiring or promotion decisions. The law also prohibits the use of AI systems that have a discriminatory effect.

In California, there are several laws on the books governing AI. AB 2013, AI Training Data Transparency, which went into effect in January, requires transparency for AI training data. SB 53, the Transparency in Frontier Artificial Intelligence Act (TFAIA), which also went into effect in January, requires AI companies to disclose safety frameworks, report incidents, and protect whistleblowers. Finally, SB 942, the California AI Transparency Act, which goes into effect in August, requires watermarking of AI-generated content.

In Colorado, SB 24-205 takes effect in June, and covers high-risk AI systems used in employment, healthcare, and financial services. Passed in 2024, it was the first comprehensive AI law in the country, taking a risk-based approach to AI accountability. Colorado was specifically singled out in the December executive order, which instructed the US Attorney General to start challenging state AI laws.

In addition, New York state’s Responsible AI Safety and Education Act (RAISE) was signed into law in December and becomes effective in January 2027. RAISE requires transparency around training data, safety plans, and safety incidents.


April 1, 2026
