EU’s AI Act challenge: balance innovation and consumer protection

Members of the EU Parliament have agreed on a first draft for regulating the use of AI. The AI Act is now taking the next procedural step, to be negotiated and worked out with individual member states. In the end, there should be an EU-wide body of law to regulate the use of AI technologies, such as ChatGPT.

Essentially, the AI Act is about categorizing AI systems into specific risk classes, ranging from minimal risk, to high-risk systems, to those that should be banned altogether. When AI systems make consequential decisions about people, especially high standards should apply, particularly regarding the transparency of the data an AI was trained on for its decision-making and how its algorithms ultimately reach decisions. In this way, EU politicians want to ensure that these AI applications function securely and reliably, and don’t violate fundamental human rights.

Until a final set of rules is in place, though, there will inevitably be further discussion within the various EU bodies, as there is no consensus yet. Italy, for instance, recently took a tougher stance and banned OpenAI’s generative AI tool ChatGPT due to a lack of age controls and possible copyright infringement in the training data. In the meantime, however, Italian authorities have allowed ChatGPT use again under certain conditions.

Other EU countries have followed the initiative of the Italian data protection authorities as well. Germany, for example, raised the prospect of banning ChatGPT if it can be proven that the tool violates applicable data protection rules.

Too much regulation hampers innovation

While consumer advocates are calling for strict rules to protect citizens’ rights, business representatives warn that overly strict regulation of the technology could slow innovation. According to advocates of a less strict interpretation of the AI Act, the EU could fall behind in an important future-oriented industry.

In an open letter, representatives of the Large Scale Artificial Intelligence Open Network (LAION e.V.) called on EU politicians to proceed with moderate AI regulation. The intention to introduce AI oversight is welcome, the letter says, but such oversight must be carefully calibrated to protect research and development and to maintain Europe’s competitiveness in the field of AI. Signatories include Bernhard Schölkopf, director at the Max Planck Institute for Intelligent Systems in Tübingen, and Antonio Krüger, head of the German Research Center for Artificial Intelligence (DFKI).

LAION demands that open-source AI models in particular shouldn’t be over-regulated. Open-source systems, the letter argues, allow for more transparency and security in the use of AI. In addition, open-source AI would prevent a few corporations from controlling and dominating the technology. In this way, moderate regulation could also help advance Europe’s digital sovereignty.

Too little regulation weakens consumer rights

On the other hand, the Federation of German Consumer Organizations (VZBV) calls for more rights for consumers. According to a statement by the consumer advocates, consumer decisions will increasingly be influenced by AI-based recommendation systems in the future, and in order to reduce the risks of generative AI, the planned European AI Act should ensure strong consumer rights and the possibility of independent risk assessment.

“The risk that AI systems lead to false or manipulative purchase recommendations, ratings, and consumer information is high,” said Ramona Pop, board member of VZBV. “Artificial intelligence is not always as intelligent as the name suggests. It must be ensured that consumers are adequately protected against manipulation and deception, for example through AI-controlled recommendation systems. Independent scientists must be given access to the systems to assess risks and functionality. We also need enforceable individual rights of those affected against AI operators.” The VZBV also adds that people must be given the right to correction and deletion if systems such as ChatGPT cause disadvantages due to reputational damage, and that the AI Act must ensure that AI applications comply with European laws and correspond to European values.

Self-assessment by manufacturers is not enough

Although the Technical Inspection Association (TÜV) basically welcomes the agreement of the groups in the EU Parliament on a common position for the AI Act, it sees further potential for improvement. “A clear legal basis is needed to protect people from the negative consequences of the technology, and at the same time, to promote the use of AI in business,” said Joachim Bühler, MD of TÜV.

Bühler says it must be ensured that the specifications are actually observed, particularly with regard to the transparency of algorithms. However, an independent review is intended for only a small portion of high-risk AI systems. “Most critical AI applications, such as facial recognition, recruiting software, or credit checks, should continue to be allowed to be launched on the market with a pure manufacturer’s self-declaration,” said Bühler. In addition, the classification as a high-risk application is to be based in part on a self-assessment by the providers. “Misjudgments are inevitable,” he adds.

According to TÜV, it would be better to have all high-risk AI systems tested independently before launch to ensure the applications meet security requirements. “This is especially true when AI applications are used in critical areas such as medicine, vehicles, energy infrastructure, or in certain machines,” said Bühler.

AI should serve, not manipulate

While discussions about AI regulation are in full swing, the G7 digital ministers, at a meeting in Takasaki, Japan, at the end of April, spoke out in favor of accompanying the rapid development of AI with clear international rules and standards, according to a statement by the German Federal Ministry for Digital Affairs and Transport (BMDV).

“We in the G7 agree that when it comes to regulating AI, we must act quickly,” said Volker Wissing, Germany’s Federal Minister for Digital Affairs and Transport. “Generative AI has immense potential to increase our productivity and make our lives better. It’s all the more important that the large democracies lead the way and accompany its development with clever rules that protect people from abuse and manipulation. Artificial intelligence should serve us, not manipulate us.”

But it’s questionable whether it will happen as quickly as Wissing would like, seeing as the AI Act has been in the works in Brussels since April 2021. After the agreement in the EU Parliament, trilogue negotiations between the Council, Parliament, and Commission could begin in the summer of 2023. It’s anyone’s guess when a final set of rules will be in place and converted into applicable law, and it remains to be seen whether the technological development of AI will have outpaced attempts at regulation by then.

