Tiatra, LLC
Information Technology Solutions for Washington, DC Government Agencies
  • Home
  • About Us
  • Services
    • IT Engineering and Support
    • Software Development
    • Information Assurance and Testing
    • Project and Program Management
  • Clients & Partners
  • Careers
  • News
  • Contact
 

EU guidelines on AI use met with massive criticism

With the General Purpose AI Code of Practice (GPAI Code of Practice), the EU has published its first code of conduct for regulating general-purpose AI. It is intended to simplify compliance with the EU AI Act.

The guidelines will enter into force on Aug. 2, 2025, and the EU intends to apply them in practice starting in 2026. The guidelines are not without controversy, however, and have drawn criticism from lobby groups, CEOs and CIOs, and NGOs.

The Code of Practice

The Code of Practice consists of three chapters: Transparency, Copyright, and Safety and Security.

  • The Transparency chapter provides a user-friendly template for documentation. It is intended to enable providers to easily document the information required to comply with the AI Act’s obligation on model providers to ensure sufficient transparency (Article 53 of the AI Act).
  • The Copyright chapter offers providers practical solutions to comply with the AI Act’s obligation to develop a strategy to comply with EU copyright law (Article 53 of the AI Act).
  • The Safety and Security chapter outlines concrete, state-of-the-art practices for addressing systemic risks, i.e., risks posed by the most advanced AI models. It applies only to providers of general-purpose AI (GPAI) models with systemic risk, who can rely on this chapter to meet the AI Act’s corresponding obligations (Article 55 of the AI Act).

Criticism from Bitkom

German digital association Bitkom is still relatively diplomatic in its criticism of the GPAI Code of Practice. The association sees it as an opportunity to create legal certainty for the development of AI in Europe. Furthermore, it has been simplified compared to initial drafts and is more closely aligned with the legal text, making it easier for companies to apply.

“The Code of Practice must not become a brake on Europe’s AI position,” warns Susanne Dehmel, member of Bitkom’s management board. However, for the AI Act to be truly implemented in practice, Dehmel adds, “very comprehensive but vaguely worded audit requirements must be improved and the bureaucratic burden significantly reduced.”

Bitkom is critical of the tightened requirement for open risk identification for very powerful AI models.

What EU CEOs say

More than 45 top managers also delivered a clear message in an open letter to the EU. They warn that the EU is losing itself in the complexity of regulating artificial intelligence, and thus risking its own competitiveness. The regulations, they say, are unclear in some areas and contradictory in others.

The managers are calling for implementation of the EU AI Act to be postponed by two years. The letter was initiated by the lobby group EU AI Champions Initiative, which represents around 110 EU companies. Signatories include top executives at Mercedes-Benz, Lufthansa, Philips, Celonis, Airbus, AXA, and France’s BNP Paribas, to name just a few.

SAP and Siemens call for new AI Act

Siemens CEO Roland Busch and SAP CEO Christian Klein did not sign the letter, feeling its criticism didn’t go far enough. In an interview with the Frankfurter Allgemeine Zeitung (FAZ), they instead called for a fundamental revision of the EU AI Act: a new framework that promotes innovation rather than hinders it. For Busch, the AI Act in its current form is “toxic to the development of digital business models.”

An NGO perspective

The Future Society, an NGO that sees itself as a representative of civil society, has also criticized the new guidelines. The NGO is particularly concerned that US tech providers managed to weaken and water down key points in closed-door sessions.

Nick Moës, executive director of The Future Society, says: “This weakened code puts European citizens and businesses at a disadvantage and misses opportunities to strengthen security and accountability worldwide. It also undermines all other stakeholders whose commitment and efforts for the common good have remained overshadowed by the influence of US Big Tech.”

Four points of criticism

The NGO is particularly critical of the following four points:

  • The AI Office receives important information only after the product has been launched on the market.

Providers share the model report, including its risk assessment, only after deployment, following a “publish first, then question” approach. This allows potentially dangerous models to reach European users unchecked. In the event of violations, the AI Office must then initiate a recall, which can fuel unfounded criticism of innovation.

  • No effective whistleblower protection.

Information from within is crucial in capital- and market-driven industries. In a world where AI companies know a lot about users but users know little about them, internal whistleblowing is essential. The AI Office must be a safe haven and offer the same standards of protection as those required by the EU Whistleblower Directive.

  • No mandatory plans for emergency scenarios.

Such protocols are standard in other high-risk areas, and damage from GPAI models can spread extremely quickly. Therefore, providers must be required to plan emergency-response and damage-mitigation strategies well in advance.

  • Providers have extensive decision-making power in risk management.

Through lobbying, outcome-based rules were introduced: providers are now allowed to identify risks themselves, define their own thresholds, and conduct their own evaluations, without first having to prove that they deserve this trust.


Read More from This Article: EU guidelines on AI use met with massive criticism
Source: News

Category: News
July 16, 2025
Tags: art
