The UK government could improve productivity through widespread and systematized uptake of generative AI, but only if it takes steps to build its expertise and come up with an adoption strategy, a new study has found.
Generative AI offers the possibility of “large-scale productivity gains” for UK government workers, but the government lacks an overarching strategy for AI adoption, according to the study, “Use of artificial intelligence in government.” It was prepared by the National Audit Office, an independent agency of the UK’s parliament that functions as a public spending watchdog.
Various parts of the government have begun to explore AI use independently, leading to scattered uptake and no broad-based vision for the technology's role in government. The Department for Science, Innovation and Technology — along with its specialist offices such as the Artificial Intelligence Policy Directorate — bears primary responsibility for setting an overall AI strategy, a pressing task given that most government departments (70%, according to the study) are already piloting or planning AI use cases.
The Cabinet Office, home of the government’s Central Digital and Data Office (CDDO), must take a leadership role, according to Meg Hillier, a Member of Parliament and chair of the Committee of Public Accounts, which oversees the NAO.
“Government has encouraged the use of AI for several years and there is existing AI activity and exploration across government, so the Cabinet Office needs to bring together this insight and learning and share it across departments,” she said in a statement. “[AI] provides huge opportunities to transform public services, but to maximize these the government will need to implement and adopt AI at scale across the public sector.”
AI has already found its way into many government departments, with the most common use of AI identified by the NAO being to support operational decision-making through automated document analysis, digital assistant functionality, and image recognition.
Adopting and scaling AI
Some of the NAO’s advice to government could also apply to enterprises, especially those with a similar aversion to risk and change.
Testing and piloting AI is key to successful adoption, the study found, and the government bodies surveyed are doing well on this score, with 70% already doing so. The government also has its equivalent of corporate “centers of excellence,” such as the NHS AI Lab for health data applications, a data science campus at the Office of National Statistics, and a number of initiatives supported by the UK Research and Innovation program.
When it comes to implementing and scaling AI, even government workers need to understand the business need, the study found. They also need to identify which senior leaders have clear accountability for the success of AI initiatives, what the desired outcomes are, and how performance will be measured.
One area where government departments are falling behind is in considering the impact on staff: the implications for the overall composition of the workforce, and the skills it will require, have not yet been examined in sufficient detail, the study found.
Government itself is one huge legacy system, so it’s little wonder that the need to address legacy IT looms large in the study. Data quality and consistency can cause issues for AI implementations, and plans for AI adoption may depend on wider digital transformation programs that need to be taken into account, it said.
Data infrastructure
Legacy systems also dominated the study’s chapter on infrastructure and digital enablers. The NAO noted that the government intends to have remediation plans in place for its riskiest systems by 2025, but observed with understatement, “Fully addressing these legacy issues will take some time.”
Access to quality data is a barrier to implementation for 68% of the government bodies surveyed. The NAO highlighted work the CDDO is doing to overcome this barrier, including developing data maturity assessments, creating a centralized hub for data discovery, and establishing shared data assets — projects that would be at home in many enterprises too.
Safety standards
There is an element of caution to government departments’ approach. The NAO found that government bodies had flagged a range of specific AI risks, including legal liability, the risk of inaccurate output, and security. More than half of the departments surveyed, it said, required support to help address these issues.
The study noted that existing tools for this could be more widely used. For instance, the Algorithmic Transparency Recording Standard (ATRS), designed to help government bodies improve transparency and report on the automated tools they are using, is underused despite being approved for government-wide use in 2022, it said. Only one-quarter of organizations surveyed for the study said they were “always or usually compliant” with ATRS, even though it is likely to become mandatory for government departments later this year.
The optimism expressed in the NAO study, with its focus on maximizing the potential gains, marks a contrast to some other governmental attitudes toward AI deployment.
An executive order issued by the Biden Administration in October highlighted safety and security, setting requirements for safety testing, mandating the creation of standards and testing for any AI meant for government use, and generally recognizing the potential dangers that generative AI may pose.
Another study, commissioned by the US State Department and completed last month, attempted to quantify those risks, describing the worst case as an “extinction-level threat to the human species,” according to media reports.