EU AI Act: one year on, new measures enter effect

The EU AI Act, the European Union’s artificial intelligence regulation, had worldwide impact when it entered into force on August 1, 2024. But the act’s rollout is only at its midpoint: another wave of its provisions takes effect this weekend, with more to come next year.

The act imposes prohibitions or conditions on AI systems depending on whether their risk is considered unacceptable, high, limited or minimal, with a gradual rollout schedule for the rules. The prohibitions on unacceptable-risk AI systems have been in effect since February 2, 2025. On August 2, 2025, measures relating to governance standards, general purpose AI (GPAI) models, and the sanctions regime, among others, take effect. Certain exemptions mean that full implementation of the law won’t happen until 2030.

A quiet beginning

While some of the act’s measures — including the ban on unacceptably risky AI systems, the opening of the European AI Office, and the publication of guidelines for GPAI models — are already technically in effect, they have been largely invisible, according to Víctor Rodríguez, senior lecturer in the Department of Artificial Intelligence at the Polytechnic University of Madrid (UPM). “Since sanctions don’t start until August 2, 2025, we haven’t seen examples with ‘media impact.’ We will see them soon,” he said. Rodríguez alluded to other factors bearing on the regulation, such as the arrival of a new Trump administration in the White House, which may have affected how the EU’s regulation is perceived. “The European Commission wanted to repeat the success of the General Data Protection Regulation (GDPR), which served as a beacon for a world that largely tried to replicate it, but this so-called ‘Brussels effect’ may not happen this time,” he said, citing tech giants’ division over the EU’s General Purpose AI Code of Practice, which Google agreed to sign but Meta refused.

Roger Segarra, a partner in the IT and intellectual property department at Osborne Clarke, is already seeing the impact of the act. “Some prohibited practices, such as the use of real-time remote biometric identification systems by public authorities for surveillance purposes, have had a deterrent effect since their inception, even before their effective implementation,” he said.

He highlighted a certain disparity between companies in their assimilation of the regulation: while large companies “have voluntarily taken early action to adjust,” among smaller companies the situation is different. “For SMBs, a certain climate of tension has been generated by the bureaucratic and economic burden involved in complying with the regulations and — for the moment — the scarcity of practical guidelines,” he said. “Likewise, the level of implementation is uneven among the different member states,” he added, highlighting Spain’s role as the first EU country to create its national artificial intelligence supervisory authority, AESIA.

For others, though, the EU AI Act isn’t moving fast enough.

“The first year has shown that AI is advancing faster than the legislative capacity to regulate it,” said Arnau Roca, managing partner at Overstand Intelligence, a consultancy specializing in AI. Roca called the regulation “a necessary and positive first step towards regulating the use of artificial intelligence,” but saw challenges in its deployment due to the rapid evolution of the technology: “On a daily basis we see at Overstand Intelligence how some project requests sit right on the boundary between the ethical and the abusive.”

In this regard, he spoke of the potential risks to humanity posed by tools such as real-time image recognition: “What makes AI law obsolete is not only the technology itself, but also the human capacity to quickly imagine applications that are not yet contemplated in the current regulation.”

Rodríguez identified other “disruptive technological novelties” such as agents, real-time deepfakes, and multimodal models that combine text, image and audio. “And yet, what is truly dizzying is to look at those applications of AI that fall outside the law or are simply not affected by it, such as the actual deployment of military applications on battlefields or the use of these technologies in platforms such as Palantir for global mass surveillance.”

Segarra highlighted the “establishment of absolute prohibitions respectful of fundamental rights and vulnerable people” as especially relevant, anticipating the proliferation of AI systems “invasive of people’s rights and freedoms,” such as subliminal manipulative methods or social scoring.

Room for improvement

The accelerated evolution of intelligent technologies means that the AI Act must be designed for continuous adaptation, as all three experts acknowledged. Rodríguez foresaw adjustments “in a couple of years,” which he said will be easier thanks to the very design of the regulation, which defines general principles and obligations but leaves the technical details to harmonized international standards. “Modifying an international standard, even with all the bureaucratic apparatus, is more agile than reopening the entire legislative process; and the discussions take place in a more technical than political environment.” This should make updates more efficient, “but that which falls outside its scope will remain a challenge.”

The law needs dynamic mechanisms to adapt to unforeseen scenarios, said Roca. “An adaptable and agile regulation that allows constant updates without losing legal certainty is key.”

But, said Segarra, political and social pressure could lead to the AI Act being formally revised before the end of the five-year period defined in the text itself. For him, the continuous, post-market review of models is one of the “hottest aspects” of the regulation. He spoke of “the need to include a higher level of control once AI systems have been launched in the market, including the performance of fundamental rights impact assessments on a systematic basis.”

Some aspects of the law can be improved, said Rodríguez, including what it has to say about open source. “The obligations imposed by the regulation affect AI developers and users, including in open source projects. The definition of ‘provider’ is confusing and the impact on projects such as Llama is uncertain. How can we regulate modifications of open source models by third parties?” he asked. He also pointed to the administrative and compliance burden for small players in the market, something Segarra agreed with: “The imposition of regulatory burdens on SMBs requires the design of guidelines that allow the adoption of simplified procedures and the subsidized use of regulatory compliance tools.”

There are exemptions, Rodríguez acknowledged, “but there is still no clarity on when they apply in practice.” The UPM lecturer added that certain quarters have criticized “the excessive accumulation of power by Brussels,” pointing to the requirement to register high-risk systems in a centralized database as a particular sticking point. “Companies fear the leaking of trade secrets, some member states believe that registration should be at state level and not European, SMBs fear bureaucracy, others excessive accumulation of power,” he said.

Despite having been in force for a year, the AI Act faces significant challenges if it is to keep up with the times.


Read More from This Article: EU AI Act: one year on, new measures enter effect
Source: News

Category: News | August 1, 2025
Tags: art

