Shortcomings in incident reporting are leaving a dangerous gap in the regulation of AI technologies.
In other safety-critical industries, such as aviation and medicine, incidents are routinely tracked and investigated, but such reporting is lacking in the increasingly important area of AI, warns the Centre for Long-Term Resilience (CLTR), a UK think tank.
Incidents in which AI systems unexpectedly malfunction or produce erroneous outputs when confronted with situations outside their training data are a growing problem as these systems are deployed in critical real-world applications.
Problems can also arise where AI’s objectives are improperly defined or where the system’s behaviour cannot be adequately verified or controlled.
Notable examples of AI safety incidents include:
- Trading algorithms causing market “flash crashes”;
- Facial recognition systems leading to wrongful arrests;
- Autonomous vehicle accidents;
- AI models providing harmful or misleading information through social media channels.
Incident reporting can help AI researchers and developers learn from past failures. By documenting cases where automated systems misbehave, malfunction, or put users at risk, we can better discern problematic patterns and mitigate the risks.
Novel problems
Without an adequate incident reporting framework, systemic problems could set in.
AI systems could directly harm the public, for example by improperly revoking access to social security payments, according to CLTR. The think tank looked closely at the situation in the UK, although its findings could also apply to many other countries.
The UK government’s Department for Science, Innovation & Technology (DSIT) lacks a central, up-to-date picture of incidents involving AI systems as they emerge, according to CLTR. “Though some regulators will collect some incident reports, we find that this is not likely to capture the novel harms posed by frontier AI,” it said, referring to the high-powered generative AI models at the cutting edge of the industry.
DSIT should establish a framework for reporting public sector AI incidents, it said, and task regulators with identifying gaps in existing incident-handling procedures, drawing on expert advice.
Lastly, CLTR said, capacity to monitor, investigate, and respond to incidents needs to be enhanced through measures such as the establishment of a pilot AI incident database.
AI-specific reporting regulations
Industry experts offered a mixed but broadly positive reaction to CLTR’s report.
Ivana Bartoletti, chief privacy and AI governance officer at Wipro and co-founder of the think tank Women Leading in AI, reacted positively to CLTR’s call to improve incident response.
“Incident reporting is a key part of AI governance at both government and business level,” Bartoletti told CIO.com. “I believe that incident analysis can play a key role in informing regulatory responses, tailoring policies and driving governance initiatives.”
Crystal Morin, cybersecurity strategist at Sysdig, argued existing regulatory frameworks were adequate.
“When it comes to reporting security incidents that involve AI workloads, AI-specific reporting regulations seem unnecessary when comprehensive regulatory guidelines, such as NIS2, exist,” according to Morin.
Veera Siivonen, CCO and partner at Saidot, argued for a “balance between regulation and innovation, providing guardrails without narrowing the industry’s potential for experimentation” with the development of artificial intelligence technologies.
Industry-specific needs
AI technology is evolving fast and its regulation is still in its infancy, but organisations can take action now to position themselves for the future.
Nayan Jain, executive head of AI at digital studio ustwo, argued that AI governance of products should be separated by industry and not centralised, as different approaches are needed in different industries.
“In addition to having remediation and mitigation steps in place, it is important to accept that AI itself can be used to monitor live systems, report incidents, and even help manage risk by providing automated solutions or fixes,” said Jain. “Compliance certifications and standards will emerge by industry as we’ve seen with the cloud and software development broadly.”
Jain added that being transparent about the use of AI in software and providing channels for users to flag incidents is important.
Captain’s log
To effectively record and manage AI incidents, enterprises could implement a comprehensive incident logging system to document all AI-related issues, including unexpected behaviours, errors, biases, or security breaches.
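As a rough illustration of what such logging might involve, the sketch below shows a minimal incident record in Python. The `AIIncident` class, its field names, and the JSON-lines log file are illustrative assumptions, not a prescribed schema or any vendor's API.

```python
# A minimal sketch of an AI incident log entry; the class and field names
# are illustrative assumptions, not a standard schema.
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone
from enum import Enum
import json


class IncidentType(str, Enum):
    UNEXPECTED_BEHAVIOUR = "unexpected_behaviour"
    ERRONEOUS_OUTPUT = "erroneous_output"
    BIAS = "bias"
    SECURITY_BREACH = "security_breach"


@dataclass
class AIIncident:
    system_name: str              # which AI system was involved
    incident_type: IncidentType   # category of the issue
    description: str              # what happened, in plain language
    severity: str                 # e.g. "low", "medium", "high"
    model_version: str            # model version in production at the time
    detected_by: str              # monitoring tool, user report, audit, etc.
    occurred_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )


def log_incident(incident: AIIncident, path: str = "ai_incidents.jsonl") -> None:
    """Append the incident as one JSON line to a local log file."""
    with open(path, "a", encoding="utf-8") as fh:
        fh.write(json.dumps(asdict(incident)) + "\n")


log_incident(AIIncident(
    system_name="loan-approval-scorer",
    incident_type=IncidentType.BIAS,
    description="Approval rates diverged sharply across demographic groups",
    severity="high",
    model_version="2.3.1",
    detected_by="weekly fairness audit",
))
```

In practice an enterprise system would feed such records into a central database or ticketing workflow rather than a flat file, but even this minimal structure captures the behaviours, errors, biases, and security breaches the advice above refers to.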
Real-time monitoring tools are essential, according to Luke Dash, CEO of risk management platform ISMS.online.
“Implementing robust version control for AI models and datasets is crucial to track changes and allow for rollbacks if necessary,” Dash said. “It’s then important to regularly test and validate AI systems to help identify potential issues proactively.”
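One lightweight way to act on that advice is to fingerprint each model and dataset at release time so that any incident can be traced back to the exact artefacts involved and, if necessary, rolled back. The helper names and registry file below are assumptions for illustration, not part of any specific tool.

```python
# A minimal sketch of model/dataset fingerprinting for version tracking;
# helper names and the registry layout are illustrative assumptions.
import hashlib
import json
from pathlib import Path


def fingerprint(path: str) -> str:
    """Return the SHA-256 hash of a model or dataset file."""
    digest = hashlib.sha256()
    with open(path, "rb") as fh:
        for chunk in iter(lambda: fh.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()


def record_release(model_path: str, dataset_path: str,
                   registry: str = "model_registry.json") -> None:
    """Append the current model and dataset fingerprints to a registry so a
    release can be audited, or rolled back to, after an incident."""
    entry = {
        "model": model_path,
        "model_sha256": fingerprint(model_path),
        "dataset": dataset_path,
        "dataset_sha256": fingerprint(dataset_path),
    }
    registry_path = Path(registry)
    history = (json.loads(registry_path.read_text())
               if registry_path.exists() else [])
    history.append(entry)
    registry_path.write_text(json.dumps(history, indent=2))
```

Dedicated tooling such as data and model version control systems can do this at scale, but the principle is the same: every deployed model and training set gets an immutable, auditable identity.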
Adopting the ISO 42001 standard — a framework for establishing, implementing, maintaining, and continually improving an Artificial Intelligence Management System (AIMS) — would help organisations manage AI incidents and develop governance strategies, according to Dash.
Dash advised that to effectively combine AI governance strategies with incident reporting, enterprises should establish an AI ethics committee to oversee both governance and incident management, with input from development teams, legal departments, and risk management.
Whistleblowing
Raising the alarm about problems in AI systems also raises questions about employment law.
“If a company is using AI in a way that breaks the law or endangers health and safety and whistleblowers report this directly to their employer then they will be protected against retaliation under the current regime,” Will Burrows, partner at Bloomsbury Square Employment Law, told CIO.com.
“If there is a wider incident-reporting regime then whistleblowing laws need to be extended to ensure that whistleblowers are protected when they report AI incidents to DSIT.”
If a report to DSIT is the first port of call for a whistleblower, under the current regime they may not be protected.
“The law will need to change to give first-instance whistleblowers to DSIT protection,” Burrows said. “We would also welcome regulatory intervention to assist whistleblowers against retaliation.”
Burrows also warned of the potential for group litigation claims over harm caused by AI, which is all the more reason for companies to encourage staff to report problems internally.
“Whistleblowers often spot issues at an early stage and therefore ought to be listened to and not silenced,” said Burrows, who added that organisations should set up formal internal procedures for reporting AI incidents.