Sixteen big users and creators of artificial intelligence (AI) technology — including heavy hitters such as Microsoft, Amazon, Google, Meta, and OpenAI — have signed up to the Frontier AI Safety Commitments, a new set of safety guidelines and development outcomes for the technology.
They revealed the commitments on Tuesday, as two days of talks on AI were set to begin at the AI Seoul Summit in South Korea.
The signatories agreed to publish — if they have not done so already — safety frameworks outlining how they will measure the risks of their respective AI models. The risks might include the potential for misuse of the model by a bad actor, for instance.
The frameworks also set out a kind of safety handbrake to minimize the risks organizations take when using AI technology, specifying when severe, unmitigated risks associated with the technology would be “deemed intolerable” and what companies will do to ensure those thresholds are not crossed.
The companies have also committed not to develop or deploy an AI model or system at all if its risks cannot be kept below a certain threshold. These thresholds will be defined with input from trusted actors, which may include government entities, and released ahead of the AI Action Summit to be held in France early next year.
The other 11 companies that signed the Frontier AI Safety Commitments are: Anthropic, Cohere, G42, IBM, Inflection AI, Mistral AI, Naver, Samsung Electronics, Technology Innovation Institute, xAI, and Zhipu.ai.
Comparison to other safety commitments
The commitments follow a similar landmark agreement between the EU, the US, China, and other countries to work together on AI safety. That pact, the so-called Bletchley Declaration, made at the AI Safety Summit in the UK in November, established a shared understanding of the opportunities and risks posed by frontier AI and recognized the need for governments to work together to meet the most significant challenges associated with the technology.
One of the differences between the Frontier AI Safety Commitments and the Bletchley Declaration is obvious: the new agreement operates at the organizational level, while the Bletchley Declaration was made by governments, which suggests the latter carries greater regulatory potential for future decision-making around AI.
The Frontier commitments also enable “organizations to determine their own thresholds for risk,” which may not be as effective as setting thresholds at a higher level, as another attempt to regulate AI safety, the EU AI Act, does, noted Maria Koskinen, AI Policy Manager at AI governance technology vendor Saidot.
“The EU AI Act regulates risk management of general-purpose AI models with systemic risks, [which]…are unique to these high-impact general-purpose models,” she said.
So where the Frontier AI Safety Commitments leave it to organizations to define their own thresholds, the EU AI Act “provides guidance on this with the introduction of the definition of ‘systemic risk,’” Koskinen noted.
“This gives more certainty not only to organizations implementing these commitments but also to those adopting AI solutions and individuals being impacted by these models,” she said.
Guiding CIOs to AI safety
Still, Koskinen and others agreed that the commitments are another step in the right direction to ensure that any AI technology developed by the signatories will have a certain level of safety baked in. The commitments also set a precedent for future AI models and associated innovations as the technology evolves, she said.
“While the commitments are voluntary, with no enforcement … they still set a precedent for other AI organizations to follow,” Koskinen noted. “I expect we’ll see others follow suit, benefitting the entire AI community by creating a safer and more secure ecosystem for AI innovation.”
The commitments can also help guide CIOs in their understanding of both AI-related risk and risk-management actions as they deploy the technology, noted Pareekh Jain, CEO of Pareekh Consulting.
“Safety is an essential component of ethical AI, and as part of this association, companies will publish safety frameworks and actively work on risk management,” he said. “So, in a way, it is a step towards ethical AI.”