The UK government has introduced an AI assurance platform, offering British businesses a centralized resource for guidance on identifying and managing potential risks associated with AI, as part of efforts to build trust in AI systems.
About 524 companies now make up the UK’s AI assurance sector, supporting more than 12,000 jobs and generating over $1.3 billion in revenue, the UK government said. Official projections estimate the market could grow to $8.4 billion by 2035.
“The platform brings together guidance and new practical resources which sets out clear steps such as how businesses can carry out impact assessments and evaluations, and reviewing data used in AI systems to check for bias, ensuring trust in AI as it’s used in day-to-day operations,” the government said in a statement.
The government also plans to introduce measures to support businesses, particularly small and medium-sized enterprises (SMEs), in adopting responsible AI management practices through a new self-assessment tool.
This tool aims to help companies make informed decisions as they develop and implement AI technologies. A public consultation launched alongside the tool will collect industry feedback to enhance its effectiveness.
An attempt to manage AI
The launch comes as enterprises and regulators globally grapple with how best to manage AI, particularly around concerns like private data usage.
For businesses, the new platform can provide a streamlined method for addressing AI risks and ensuring compliance.
“By establishing clear regulatory frameworks, the UK’s AI assurance platform can foster trust and accountability, which are critical for compliance with laws such as GDPR and sector-specific regulations,” said Prabhu Ram, VP of Industry Intelligence Group at CyberMedia Research.
However, while the platform is marketed as a tool to build trust in artificial intelligence, its primary aim is to offer businesses a framework for evaluating AI in line with government standards, according to Hyoun Park, CEO and chief analyst at Amalgam Insights.
More importantly, concerns remain over the platform’s effectiveness in its current form.
“The platform is still fairly rudimentary, with plans for an essential toolkit that has yet to be fully developed,” Park said. “This assessment relies on human responses rather than direct integration with the AI itself, and the scale used by the assessment tool is vague, offering only binary yes/no options or responses that are difficult to quantify, such as ‘some.’”
Challenges to overcome
The tool could face implementation challenges because parts of its assessment rest on subjective, opinion-based responses. At the same time, businesses using the assurance tool may be able to meet governance requirements with relatively minimal effort.
“A bigger challenge will be bias assessments, as bias is actually a part of what enables AI to provide more context and detailed answers,” Park said. “Every AI has a bias, and the notion that bias can be eliminated is both a myth and potentially dangerous, as it may lead users to mistakenly believe that an AI is unbiased.”
It would make more sense to pursue a direction where companies actively document the biases that already exist in their models, and where guidance is provided on the intended biases a specific model should have, Park added.
Meanwhile, the measures could also introduce fresh challenges for businesses, particularly SMEs.
By adding layers of compliance requirements – including risk assessments, data audits, and bias checks – the platform risks creating an additional regulatory burden that may stretch the resources of smaller companies.
“SMEs with limited resources will need to overcome challenges, including resource constraints and a lack of expertise, in integrating AI assurance practices into their existing workflows,” Ram said.