The EU has emerged as the first major power to introduce comprehensive legislation governing the use of AI, after reaching a landmark provisional deal on the EU AI Act. The bill will become EU law once it is approved by the European Parliament in a vote scheduled for early 2024.
The provisional agreement defines the rules for the governance of AI in biometric surveillance and for regulating general-purpose AI systems (GPAIS), such as ChatGPT. “This regulation aims to ensure that fundamental rights, democracy, the rule of law and environmental sustainability are protected from high risk AI, while boosting innovation and making Europe a leader in the field,” the European Parliament said in its press release.
Other countries, including the US and the UK, are also working on regulations to govern AI so they can benefit from the technology while mitigating its risks. At least 28 countries recently signed the Bletchley Declaration, establishing a shared understanding of the opportunities and risks posed by AI.
Impact on EU and non-EU businesses
While the law will apply to companies operating in the EU, it is unclear whether companies based outside the EU that have customers in the region will also have to abide by the new rules. “The non-EU firms will have to decide over time whether it’s worth operating in the EU. If the rules are too strict or costly, non-EU firms will just make the decision to ignore the EU market,” said Ray Wang, principal analyst and founder of Constellation Research.
Even industry associations are questioning the impact of the AI Act on businesses. “The new requirements – on top of other sweeping new laws like the Data Act – will take a lot of resources for companies to comply with, resources that will be spent on lawyers instead of hiring AI engineers. We particularly worry about the many SME software companies not used to product legislation – this will be uncharted territory for them,” trade association DigitalEurope said in a statement.
Non-compliance with the regulations may result in fines ranging from $8 million (€7.5 million) or 1.5% of turnover to $37.6 million (€35 million) or 7% of the organization’s global turnover, depending on the nature of the infringement and the size of the company. Importantly, the legislation gives consumers the right to complain and receive explanations.
“The EU continues its history of playing the role of regulating progress and protecting its citizens from foreign AI giants. They need to get the ratification of all the member states, but it is highly likely to pass. We consider this to be a work in progress as AI advancements will outpace policy,” Wang said.
A query sent to the European Parliament remained unanswered at the time of publication.
Impact on AI innovation
Some believe that the EU AI Act is a case of over-regulation, which may stifle tech innovation. “This is definitely another case of over-regulation designed to slow down innovative tech companies with little teeth in enforcement,” Wang said. “If the EU would be serious or any government about AI ethics, they would start by declaring that every human has property rights to their genetic data and PII. Any entity that would like to use that data must seek permission and value exchange.”
However, the lawmakers argued at the press conference that the regulation is, in fact, “pro-innovation” and will allow organizations to use AI confidently.
“It is a myth that the AI Act will hamper innovation. This is just not true. On the contrary, it will foster the uptake of AI. We promote innovation through regulatory sandboxes, real-world testing and open sources [excluding open source AI systems from transparency requirement]. We have a balanced rule that ensures legal certainty, which is extremely important for businesses… [EU] is the only continent where you know what you have to do. You know what you don’t have to do. And it’s extremely important for innovation and business,” Thierry Breton, European Commissioner for Internal Market, said at the press conference.
The EU was the first region to start working on AI legislation, back in 2021. But despite the early lead and the recent provisional deal, the legislation is unlikely to come into effect before 2025, which may not be soon enough given how rapidly AI is advancing.
“As lawmakers work through the remaining technical details, we encourage EU policymakers to retain this focus on risk and accountability, rather than algorithms,” IBM said in a statement on the EU AI Act.
Restrictions on GPAIS
The growing popularity and adoption of GPAIS, such as ChatGPT, has pushed authorities to accelerate the introduction of a governance model. The EU AI Act stipulates that GPAIS will need to abide by stringent transparency requirements, including technical documentation, compliance with EU copyright law, and sharing detailed summaries of the content used for training.
GPAIS models that pose systemic risks will be required to meet stricter obligations. “If these models meet certain criteria they will have to conduct model evaluations, assess and mitigate systemic risks, conduct adversarial testing, report to the Commission on serious incidents, ensure cybersecurity and report on their energy efficiency,” the press release said.
Meanwhile, high-risk AI systems, which could potentially harm the safety, health, or fundamental rights of citizens, will need to undergo a fundamental rights impact assessment. The same requirement applies to AI systems used in the insurance and banking sectors.
Significantly, models based on open source are exempt from the transparency requirements. “So for the models, we differentiate between having a low risk of systemic risk and a high level of systemic risk. The lower tier, which is where all the European companies are right now, has very light requirements. And open source models are also excluded from this transparency, except for the foundational fundamental rights impact assessment,” Carme Artigas Brugal, State Secretary for Digitalization and Artificial Intelligence of Spain, said at the press conference.
Ban on biometric surveillance
The lawmakers have banned the use of biometric categorization based on “sensitive characteristics,” which include political and religious beliefs, among others. They have also prohibited untargeted scraping of facial images from the internet or CCTV footage, social scoring, and AI systems that manipulate human behavior or exploit people’s vulnerabilities.
Significantly, lawmakers have carved out exceptions allowing law enforcement agencies to use biometric identification systems, subject to judicial authorization and only for defined lists of crimes. Real-time use of biometric identification systems will be permitted only for a limited time and location, and comes with several conditions.
Authorities will be able to use these systems for targeted searches of victims, prevention of terrorist threats, and identification of persons suspected of having committed one of the crimes listed in the regulation.