After months of wrangling, the European Parliament has signed off on the world’s first comprehensive law to govern artificial intelligence (AI).
Members of the European Parliament (MEPs) voted 523 in favor and 46 against, with 49 abstentions, approving a text that had already been agreed in principle by the European Union’s 27 member states in December 2023.
According to the final text, the regulation aims to promote the “uptake of human-centric and trustworthy AI, while ensuring a high level of protection for health, safety, fundamental rights and environmental protection against harmful effects of artificial intelligence systems.”
Almost as an afterthought, it also “supports innovation.”
Harsh penalties
The law will apply to any company doing business in the EU, and provides for penalties of up to 7% of global turnover or €35 million, whichever is higher, for those that fail to comply with its rules on the use of AI.
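As a rough illustration only, that ceiling is simply the larger of the two figures. The short Python sketch below uses a hypothetical helper name and assumes the 7% applies to a company’s global turnover as described above; it shows the maximum possible fine, not how an actual penalty would be set.

    def max_ai_act_fine(global_turnover_eur: float) -> float:
        # Ceiling described in the article: 7% of global turnover or
        # EUR 35 million, whichever is higher (illustrative only).
        return max(0.07 * global_turnover_eur, 35_000_000.0)

    # Example: a firm with EUR 2 billion in global turnover
    print(f"{max_ai_act_fine(2_000_000_000):,.0f}")  # 140,000,000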
Much of the public debate has been around the limits it imposes on the use of biometric identification systems by law enforcement, but commercial applications of AI face bans on social scoring and on the use of AI to manipulate or exploit user vulnerabilities.
The act also enshrines the right of consumers to make complaints about the inappropriate use of AI by businesses, and to receive meaningful explanations for decisions taken by an AI that affect their rights.
The final text’s own meaningful explanation defines an AI system as “a machine-based system designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments.”
Lawmakers took a risk-based approach, which produced a list of outright bans in the final text: scraping facial images from the internet or CCTV footage to create facial recognition databases, social scoring, and AI that manipulates human behavior or exploits people’s vulnerabilities will all be prohibited.
Beyond those bans, the act designates other uses of AI as high-risk, including critical infrastructure, education and vocational training, employment, essential services such as healthcare or banking, law enforcement, migration and border management, justice, and democratic processes.
Limitations on AI in the workplace
Citing “the imbalance of power” in work and education settings, the law also lays down explicit provisions for greater employee protection: AI systems may not be used to recognize emotions in the workplace or in education.
“The intrusive nature of these systems … could lead to detrimental or unfavorable treatment of certain natural persons or whole groups thereof. Therefore, the placing on the market, putting into service, or use of AI systems intended to be used to detect the emotional state of individuals in situations related to the workplace and education should be prohibited. This prohibition should not cover AI systems placed on the market strictly for medical or safety reasons, such as systems intended for therapeutic use,” says the law.
The MEP charged with steering the law through the Parliament’s Internal Market Committee, co-rapporteur Brando Benifei, said, “Workers and unions will have to be informed of the use of artificial intelligence on them, and all content generated by AI will be clearly indicated. Finally, citizens will have the right to an explanation and to use the collective redress procedure, while deployers will be obliged to assess the impact of the AI system on the fundamental rights of the people affected.”
Big Tech’s interest
However, not all were pleased. The Left group of MEPs said the regulation was rushed and that the law “prioritizes the interests of Big Tech over citizens’ safety,” pointing to a lack of meaningful restrictions on what enterprises can do.
“Companies will have to self-assess whether the AI systems they place on the market are high-risk or not. The AI regulation needs profound improvements to put the interests of the citizens first, when it comes to dangerous products,” the group said.
The final act contains measures to support innovation and small and medium-sized businesses (SMBs), such as regulatory sandboxes and real-world testing environments that will be established at the national level and made accessible to SMBs and start-ups so they can develop and train AI systems before placing them on the market.
Far from placating critics, this measure alarmed Left MEP Kateřina Konečná. “The regulation gives companies developing AI systems freedom to test their products, under certain conditions, in real-world settings such as on our streets or online. The regulation thus puts aside citizens’ safety and puts the interest of the mega-rich at the center,” she said.
To become EU law, the final text still needs to be formally adopted by the Council of the European Union, which represents the governments of the EU’s 27 member states. The act will then enter into force 20 days after its publication in the EU’s Official Journal. However, not all of its provisions will apply immediately from that date: bans on prohibited practices take effect six months later; codes of practice, nine months later; general-purpose AI rules, 12 months later; and obligations for high-risk systems, 36 months later.
In the interim, a special “AI Office” will be set up to help companies start complying with the rules before they take effect. It has already started hiring.