Tiatra, LLC
Information Technology Solutions for Washington, DC Government Agencies
What IT leaders need to know about the EU AI Act

The European Parliament voted in mid-March to approve the EU AI Act, the world’s first major piece of legislation that would regulate the use and deployment of artificial intelligence applications.

The vote isn’t the final passage, but it indicates that many CIOs at organizations using AI tools will have new regulations to comply with, as the law will apply both to organizations developing AIs and those simply deploying them. The law will also extend beyond the borders of the EU member nations, as any company interacting with residents of the EU will be subject to the regulations.

AI legislation has been years in the making, with the EU first proposing the legislation in April 2021. Many leading voices have called for some type of AI regulation, Elon Musk and Sam Altman of OpenAI among them, but the EU AI Act also has its detractors.

The law will create new mandates for organizations to validate, monitor, and audit the entire AI lifecycle, says Kjell Carlsson, head of AI strategy at Domino Data Lab, a data science and AI company.

“With the passing of the EU AI act, the scariest thing about AI is now, unequivocally, AI regulation itself,” Carlsson says. “Between the astronomical fines, sweeping scope, and unclear definitions, every organization operating in the EU now runs a potentially lethal risk in their AI-, ML-, and analytics-driven activities.”

Carlsson fears the law will have a “profound cooling effect” on AI research and adoption. The multimillion-dollar fines in the legislation will translate directly into fewer AI-based products and services, he predicts. Fines can be up to €35 million (US $37.4 million) or 7% of a company’s annual revenue, whichever amount is greater.
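The fine ceiling quoted above reduces to a one-line formula. A minimal sketch in code (the function name and integer-euro convention are our own, not part of the law's text):

```python
def max_eu_ai_act_fine(annual_revenue_eur: int) -> int:
    """Upper bound on penalties for the most serious violations:
    EUR 35 million or 7% of annual revenue, whichever is greater."""
    return max(35_000_000, annual_revenue_eur * 7 // 100)

# A company with EUR 200M revenue: 7% is EUR 14M, so the flat cap applies.
print(max_eu_ai_act_fine(200_000_000))    # 35000000
# A company with EUR 1B revenue: 7% (EUR 70M) exceeds the flat amount.
print(max_eu_ai_act_fine(1_000_000_000))  # 70000000
```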

Still, organizations can’t just ignore the AI revolution to avoid the regulations, Carlsson adds. “Using these technologies is not optional, and every organization must increase their use of AI in order to survive and thrive,” he says.

What’s in the legislation?

The EU AI Act is broad, comprising 458 pages, but it covers three major areas:

Banned uses of AI: The regulations ban AI applications that threaten human rights, including biometric categorization systems based on sensitive physical characteristics. The untargeted scraping of facial images from the internet or security footage to create facial recognition databases is also prohibited.

The law would also ban AI systems that monitor employee or student emotions, conduct social scoring, or engage in predictive policing based on a person’s profile or characteristics. Also prohibited are AI systems that manipulate human behavior or exploit people’s vulnerabilities.

Obligations for high-risk AI systems: Organizations using AI tools that pose significant potential harm to health, safety, human rights, the environment, democracy, and the rule of law are also regulated. They must conduct risk assessments, take steps to reduce risk, maintain use logs, comply with transparency requirements, and ensure human oversight. EU residents will have the right to submit complaints about high-risk AI systems and receive explanations about decisions.

Examples of high-risk systems include AIs used in critical infrastructure, education and vocational training, employment decisions, healthcare, banking, and those that could influence elections. Some law enforcement and border control agency uses of AI will be regulated.
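The obligations above (use logs, human oversight, risk assessments) imply audit records that many organizations do not yet keep. As a hedged sketch of what a single log entry might look like: the Act mandates logging but prescribes no schema, so every field name here is illustrative.

```python
import json
from dataclasses import dataclass, asdict, field
from datetime import datetime, timezone

@dataclass
class AIUseLogEntry:
    """Illustrative audit-log record for a high-risk AI system.
    The field names are our own invention, not a schema from the Act."""
    system_id: str
    decision: str
    human_reviewer: str       # who exercised human oversight
    risk_assessment_ref: str  # pointer to the latest risk assessment
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

# Hypothetical entry for an employment-decision system (a high-risk category).
entry = AIUseLogEntry(
    system_id="credit-scoring-v3",
    decision="loan_application_flagged_for_review",
    human_reviewer="analyst_042",
    risk_assessment_ref="RA-2024-017",
)
print(json.dumps(asdict(entry), indent=2))
```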

Transparency requirements: General-purpose AI systems, and the AI models they are based on, must comply with transparency requirements, such as publishing detailed summaries of the content used for training. The most powerful general-purpose AIs will face additional regulations, and they must perform model evaluations, assess and mitigate risks, and report on incidents.

In addition, deepfakes — artificial or manipulated images, audio, and video content — will be required to be clearly labelled.
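The required "detailed summary of the content used for training" could take many forms, and forthcoming guidance will settle the format. Purely as an illustration, a machine-readable summary might look like the following; all source names and percentages here are invented:

```python
# Hypothetical training-data summary for a general-purpose model.
# The Act requires publishing such summaries; the structure below is
# our own illustrative guess, not a format the regulation prescribes.
training_data_summary = {
    "model": "example-gpt",
    "data_sources": [
        {"name": "licensed news corpus", "share_pct": 40, "copyrighted": True},
        {"name": "public-domain books", "share_pct": 35, "copyrighted": False},
        {"name": "web crawl (filtered)", "share_pct": 25, "copyrighted": True},
    ],
    "collection_period": "2019-2023",
}

# Sanity check: the declared shares should account for the full corpus.
total = sum(s["share_pct"] for s in training_data_summary["data_sources"])
assert total == 100, "source shares should sum to 100%"
print(f"{len(training_data_summary['data_sources'])} sources, {total}% covered")
```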

Transparency and rulemaking concerns

Lawyers and other observers of the EU AI Act point to a couple of major issues that could trip up CIOs.

First, the transparency rules could be difficult to comply with, particularly for organizations that don’t have extensive documentation about their AI tools or don’t have a good handle on their internal data. The requirements to monitor AI development and use will add governance obligations for companies using both high-risk and general-purpose AIs.

Second, although parts of the EU AI Act wouldn't go into effect until two years after the final passage, many of the details underpinning the regulations have yet to be written. In some cases, regulators don't have to finalize the rules until six months before the law goes into effect.

The transparency and monitoring requirements will be a new experience for some organizations, Domino Data Lab’s Carlsson says.

“Most companies today face a learning curve when it comes to capabilities for governing, monitoring, and managing the AI lifecycle,” he says. “Except for the most advanced AI companies or in heavily regulated industries like financial services and pharma, governance often stops with the data.”

The law will require high-risk AI systems to provide extensive documentation about their AI operations and use of data, adds Julie Myers Wood, CEO of Guidepost Solutions, a compliance and cybersecurity vendor. Many companies will need to increase their investments in data management and application development processes, and the law may even require the redesign of some AI systems to make them more interpretable and explainable, she adds.

“Compliance could be particularly challenging for companies that rely heavily on AI models that inherently lack transparency, such as deep neural networks, if these issues were not carefully addressed during the development or acquisition lifecycle,” she says.

While the law isn’t technically retroactive, the transparency rules will apply to AI systems that have already been developed or deployed, notes Nichole Sterling, a partner at the BakerHostetler law firm focused on data privacy and cross-border legal issues.

Companies using AI should begin documenting their processes and AI data use and examine their data management practices now, adds James Sherer, a partner at BakerHostetler and co-leader of its emerging technology and AI teams.

“If you have very good practices, you probably have the basics of a lot of these things in place,” he says. “If you don’t, there’s going to be a big documentation push, and you’re probably not going to be able to prove half of it.”
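One concrete way to start the documentation push Sherer describes is an internal inventory of AI systems keyed to the Act's risk tiers. The sketch below rests on our own assumptions: the record structure, tier labels, and example systems are illustrative, not prescribed by the law.

```python
from dataclasses import dataclass

@dataclass
class AISystemRecord:
    """One row in a hypothetical internal AI-system inventory.
    Risk tiers loosely mirror the Act's categories; field names are ours."""
    name: str
    purpose: str
    risk_tier: str  # "prohibited" | "high" | "general-purpose" | "minimal"
    data_sources: list
    owner: str
    documented: bool

inventory = [
    AISystemRecord("resume-screener", "rank job applicants",
                   "high", ["applicant CVs"], "HR", documented=False),
    AISystemRecord("chat-assistant", "draft customer-support replies",
                   "general-purpose", ["support tickets"], "IT", documented=True),
]

# Surface high-risk systems that still lack documentation.
gaps = [r.name for r in inventory if r.risk_tier == "high" and not r.documented]
print("undocumented high-risk systems:", gaps)  # ['resume-screener']
```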

The unknown unknowns

Meanwhile, EU regulators have up to 18 months from the final passage of the legislation to write many of the specific definitions and rules in the law. The law contains a lot of requirements to comply with, and some of them are likely to be nuanced, Sherer says.

“There’s a lot of moving parts, and there’s a lot of boxes to be checked,” Sherer says. “There are a lot of holes that need to be filled in by regulatory input.”

Observers of the law are waiting on guidelines for assessing high-risk AI systems, examples of use cases, and possible codes of conduct, Sterling says. “That would be really helpful to have,” she says.

Finally, the legislation focuses more on the effect of AI systems than on the systems themselves, which could make compliance difficult, given the rapid advancements in AI and its unpredictability, Sherer says.

“You may have an idea of what a system is going to do,” he says. “If its effects start to change, then you’ve got a lot of these requirements that would trail behind that.”

Artificial Intelligence, Compliance, Government, Regulation



Category: News | April 30, 2024
