Tiatra, LLC
Information Technology Solutions for Washington, DC Government Agencies
Coming AI regulations have IT leaders worried about hefty compliance fines

More than seven in 10 IT leaders are worried about their organizations’ ability to keep up with regulatory requirements as they deploy generative AI, with many concerned about a potential patchwork of regulations on the way.

More than 70% of IT leaders named regulatory compliance as one of their top three challenges related to gen AI deployment, according to a recent survey from Gartner. Less than a quarter of those IT leaders are very confident that their organizations can manage security and governance issues, including regulatory compliance, when using gen AI, the survey says.

IT leaders appear worried about complying with a growing number of AI regulations, some of which may conflict with one another, says Lydia Clougherty Jones, a senior director analyst at Gartner.

“The number of legal nuances, especially for a global organization, can be overwhelming, because the frameworks that are being announced by the different countries vary widely,” she says.

Gartner predicts that AI regulatory violations will create a 30% increase in legal disputes for tech companies by 2028. By mid-2026, new categories of illegal AI-informed decision-making will cost more than $10 billion in remediation costs across AI vendors and users, the analyst firm also projects.

Just the start

Government efforts to regulate AI are likely in their infancy, with the EU AI Act, which went into effect in August 2024, one of the first major pieces of legislation targeting the use of AI.

While the US Congress has so far taken a hands-off approach, a handful of US states have passed AI regulations. The 2024 Colorado AI Act, for example, requires AI users to maintain risk management programs and conduct impact assessments, and requires both vendors and users to protect consumers from algorithmic discrimination.

Texas has also passed its own AI law, the Texas Responsible Artificial Intelligence Governance Act (TRAIGA), which goes into effect in January 2026. TRAIGA requires government entities to inform individuals when they are interacting with an AI. It also prohibits using AI to manipulate human behavior, such as by inciting self-harm, or to engage in illegal activities.

The Texas law includes civil penalties of up to $200,000 per violation or $40,000 per day for ongoing violations.

Then, in late September, California Governor Gavin Newsom signed the Transparency in Frontier Artificial Intelligence Act, which requires large AI developers to publish descriptions of how they have incorporated national standards, international standards, and industry-consensus best practices into their AI frameworks.

The California law, which likewise goes into effect in January 2026, mandates that AI companies report critical safety incidents, including cyberattacks, within 15 days, and includes provisions to protect whistleblowers who report violations.

Companies that fail to comply with the disclosure and reporting requirements face fines of up to $1 million per violation.

California IT regulations have an outsize impact on global practices because the state’s population of about 39 million gives it a huge number of potential AI customers protected under the law; California’s population is larger than that of more than 135 countries.

California also is the AI capital of the world, containing the headquarters of 32 of the top 50 AI companies worldwide, including OpenAI, Databricks, Anthropic, and Perplexity AI. All AI providers doing business in California will be subject to the regulations.

CIOs on the forefront

With US states and more countries potentially passing AI regulations, CIOs are understandably nervous about compliance as they deploy the technology, says Dion Hinchcliffe, vice president and practice lead for digital leadership and CIOs at market intelligence firm Futurum Equities.

“The CIO is on the hook to make it actually work, so they’re the ones really paying very close attention to what is possible,” he says. “They’re asking, ‘How accurate are these things? How much can data be trusted?’”

While some AI regulatory and governance compliance solutions exist, some CIOs fear that those tools won’t keep up with the ever-changing regulatory and AI functionality landscape, Hinchcliffe says.

“It’s not clear that we have tools that will constantly and reliably manage the governance and the regulatory compliance issues, and it’ll maybe get worse, because regulations haven’t even arrived yet,” he says.

AI regulatory compliance will be especially difficult because of the nature of the technology, he adds. “AI is so slippery,” Hinchcliffe says. “The technology is not deterministic; it’s probabilistic. AI works to solve all these problems that traditionally coded systems can’t because the coders never thought about that scenario.”

Tina Joros, chairwoman of the Electronic Health Record Association AI Task Force, also sees concerns over compliance because of a fragmented regulatory landscape. The various regulations being passed could widen an already large digital divide between large health systems and their smaller and rural counterparts that are struggling to keep pace with AI adoption, she says.

“The various laws being enacted by states like California, Colorado, and Texas are creating a regulatory maze that’s challenging for health IT leaders and could have a chilling effect on the future development and use of generative AI,” she adds.

Even bills that don’t make it into law require careful analysis, because they could shape future regulatory expectations, Joros adds.

“Confusion also arises because the relevant definitions included in those laws and regulations, such as ‘developer,’ ‘deployer,’ and ‘high risk,’ are frequently different, resulting in a level of industry uncertainty,” she says. “This understandably leads many software developers to sometimes pause or second-guess projects, as developers and healthcare providers want to ensure the tools they’re building now are compliant in the future.”

James Thomas, chief AI officer at contract software provider ContractPodAi, agrees that the inconsistency and overlap between AI regulations creates problems.

“For global enterprises, that fragmentation alone creates operational headaches — not because they’re unwilling to comply, but because each regulation defines concepts like transparency, usage, explainability, and accountability in slightly different ways,” he says. “What works in North America doesn’t always work across the EU.”

Look to governance tools

Thomas recommends that organizations adopt a suite of governance controls and systems as they deploy AI. In many cases, a major problem is that AI adoption has been driven by individual employees using personal productivity tools, creating a fragmented deployment approach.

“While powerful for specific tasks, these tools were never designed for the complexities of regulated, enterprise-wide deployment,” he says. “They lack centralized governance, operate in silos, and make it nearly impossible to ensure consistency, track data provenance, or manage risk at scale.”
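The gap Thomas describes, siloed tools with no central record of who uses what data, can be made concrete. Below is a minimal, hypothetical sketch (the names and classes are illustrative, not any vendor's product) of a central registry that records each AI tool, its owner, and its data sources, so provenance and review-status questions have a single answer:

```python
from dataclasses import dataclass, field
from typing import Dict, List


@dataclass
class AIToolRecord:
    """One AI tool in use somewhere in the enterprise."""
    name: str
    owner: str                                   # team accountable for the tool
    data_sources: List[str] = field(default_factory=list)
    approved: bool = False                       # has governance reviewed it?


class GovernanceRegistry:
    """Single place to register AI tools and answer provenance queries."""

    def __init__(self) -> None:
        self._tools: Dict[str, AIToolRecord] = {}

    def register(self, record: AIToolRecord) -> None:
        self._tools[record.name] = record

    def approve(self, name: str) -> None:
        self._tools[name].approved = True

    def unapproved(self) -> List[str]:
        """Tools in use that governance has not yet reviewed."""
        return [t.name for t in self._tools.values() if not t.approved]

    def provenance(self, name: str) -> List[str]:
        """Which data sources feed this tool?"""
        return self._tools[name].data_sources


registry = GovernanceRegistry()
registry.register(AIToolRecord("draft-assistant", "legal", ["contract-archive"]))
registry.register(AIToolRecord("chat-notes", "sales", ["crm-exports"]))
registry.approve("draft-assistant")
print(registry.unapproved())  # ['chat-notes']
```

The point of the sketch is the inversion: instead of each team's tool holding its own undiscoverable state, compliance questions ("what is unreviewed?", "what data feeds this?") become one registry query.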

As IT leaders struggle with regulatory compliance, Gartner also recommends that they focus on training AI models to self-correct, create rigorous use-case review procedures, increase model testing and sandboxing, and deploy content moderation techniques such as abuse-reporting buttons and AI warning labels.
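The last two moderation techniques Gartner names can be sketched in a few lines. This hypothetical wrapper (all names are illustrative) attaches a warning label to every model response and exposes a report-abuse hook that captures user feedback for later review:

```python
from dataclasses import dataclass, field
from typing import Callable, List


@dataclass
class ModeratedResponse:
    text: str
    label: str = "AI-generated content"      # warning label shown to the user
    reports: List[str] = field(default_factory=list)

    def report_abuse(self, reason: str) -> None:
        """The 'report abuse' button: capture user feedback for review."""
        self.reports.append(reason)


def moderate(generate: Callable[[str], str], prompt: str) -> ModeratedResponse:
    """Wrap any model call so every output carries a label and a report hook."""
    return ModeratedResponse(text=generate(prompt))


# A stand-in lambda plays the role of the model here.
resp = moderate(lambda p: f"Answer to: {p}", "What is our refund policy?")
resp.report_abuse("outdated policy cited")
print(resp.label, len(resp.reports))  # AI-generated content 1
```

Because the label and report channel live in the wrapper rather than in each application, they apply uniformly to every model behind the `generate` callable.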

IT leaders need to be able to defend their AI results, requiring a deep understanding of how the models work, says Gartner’s Clougherty Jones. In certain risk scenarios, this may mean using an external auditor to test the AI.

“You have to defend the data, you have to defend the model development, the model behavior, and then you have to defend the output,” she says. “A lot of times we use internal systems to audit output, but if something’s really high risk, why not get a neutral party to be able to audit it? If you’re defending the model and you’re the one who did the testing yourself, that’s defensible only so far.”
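Clougherty Jones's chain of defense, data, then model, then output, implies keeping records a neutral party could independently check. One hedged way to sketch that (hypothetical structure, not a Gartner prescription) is an append-only, hash-chained audit log linking each output to the data and model versions that produced it, so tampering after the fact is detectable:

```python
import hashlib
import json
from typing import List, Optional


class AuditLog:
    """Append-only log tying each AI output to the data and model versions
    that produced it; a hash chain lets an external auditor verify it."""

    def __init__(self) -> None:
        self.entries: List[dict] = []

    def record(self, data_version: str, model_version: str, output: str) -> str:
        entry = {
            "data_version": data_version,
            "model_version": model_version,
            "output_sha256": hashlib.sha256(output.encode()).hexdigest(),
            "prev": self.entries[-1]["hash"] if self.entries else None,
        }
        entry["hash"] = self._digest(entry)
        self.entries.append(entry)
        return entry["hash"]

    def verify(self) -> bool:
        """Recompute the hash chain; any edited or deleted entry breaks it."""
        prev: Optional[str] = None
        for e in self.entries:
            body = {k: v for k, v in e.items() if k != "hash"}
            if e["prev"] != prev or self._digest(body) != e["hash"]:
                return False
            prev = e["hash"]
        return True

    @staticmethod
    def _digest(body: dict) -> str:
        return hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()


log = AuditLog()
log.record("dataset-v3", "model-2025-10", "Loan approved")
log.record("dataset-v3", "model-2025-10", "Loan denied")
print(log.verify())  # True
```

Handing `entries` to an outside auditor is a minimal version of the neutral-party check she describes: the auditor can recompute the chain without trusting the team that produced it.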


October 16, 2025
