Analyst reaction to Thursday’s release by the US Department of Homeland Security (DHS) of a framework designed to ensure the safe and secure deployment of AI in critical infrastructure is decidedly mixed.
Where did it come from?
According to a release issued by DHS, “this first-of-its-kind resource was developed by and for entities at each layer of the AI supply chain: cloud and compute providers, AI developers, and critical infrastructure owners and operators — as well as the civil society and public sector entities that protect and advocate for consumers.”
Representatives from each sector sit on the Artificial Intelligence Safety and Security Board, a public-private advisory committee formed by DHS Secretary Alejandro N. Mayorkas, which, the release said, “determined the need for clear guidance on how each layer of the AI supply chain can do their part to ensure that AI is deployed safely and securely in US critical infrastructure.”
The board, formed in April, is made up of major software and hardware companies, critical infrastructure operators, public officials, the civil rights community, and academia, according to the release.
A once-in-a-generation opportunity
Mayorkas explained the need for the framework in a report outlining the initiative: “AI is already altering the way Americans interface with critical infrastructure. New technology, for example, is helping to sort and distribute mail to American households, quickly detect earthquakes and predict aftershocks, and prevent blackouts and other electric-service interruptions. These uses do not come without risk, though: a false alert of an earthquake can create panic, and a vulnerability introduced by a new technology may risk exposing critical systems to nefarious actors.”
AI, he said, offers “a once-in-a-generation opportunity to improve the strength and resilience of US critical infrastructure, and we must seize it while minimizing its potential harms. The framework, if widely adopted, will go a long way to better ensure the safety and security of critical services that deliver clean water, consistent power, internet access, and more.”
The release goes on to say that DHS identified three primary categories of AI safety and security vulnerabilities in critical infrastructure: “attacks using AI, attacks targeting AI systems, and design and implementation failures. To address these vulnerabilities, the framework recommends actions directed to each of the key stakeholders supporting the development and deployment of AI in US critical infrastructure.”
Industry asked for intervention
Naveen Chhabra, principal analyst with Forrester, said, “while average enterprises may not directly benefit from it, this is going to be an important framework for those that are investing in AI models.”
It is, he noted, not a final document, but “a living document, because we expect to see massive advancements in the AI space in the coming years.”
Asked why he thinks DHS felt the need to create the framework, Chhabra said that developments in the AI industry are “unique, in the sense that the industry is going back to the government and asking for intervention in ensuring that we, collectively, develop safe and secure AI.”
The question, he said, is why the industry needs to do so. “Because in AI development we are (intentionally) developing something that is going to be thousands or millions of times more intelligent than humans,” he explained. “Until AGI [artificial general intelligence] becomes a reality, we will continue to build use-case-specific AI. No species has ever been smarter than humans, and we have never seen that play out in human history. What if it goes rogue, what if it is uncontrolled, what if it becomes the next arms race, how will national security be ensured?”
Create a level playing field
Echoing those thoughts, Peter Rutten, research vice president at IDC who specializes in performance-intensive computing, said Friday that guidelines to secure the AI development and deployment that happens within organizations, within DHS itself, and within any other US government department are absolutely critical. IDC research shows that security is the number one concern in every sector, be it the enterprise, academia, or government.
“Everybody is worried not just that they will be exposing their data, or that their data is going to be misused, but also what then that means for their reputation, for their revenue streams,” he said. “If you make a big mistake and data is being compromised … there will be an uproar, and there have been uproars already, so this is a prime concern.”
“There has been a lot of criticism about how generative AI might become discriminating, how it might hide malicious content,” Rutten said. “Even in the [tech] industry, from the likes of OpenAI and other developers of AI algorithms, there’s an enormous amount of concern about how these algorithms might be misused, how malicious content might get in there, how people with bad intentions might get access to them.”
“[People] have been calling for regulation,” he continued. “They have been asking for the government to do something that would create a level playing field for everybody to stick with certain rules.”
There is, he said, “almost a desire for some lawmaking, so that people know how to go about doing this, what is expected from them, but also that they know that their competitors have to abide by the same rules, so that there is no disadvantage if you follow the rules. There is definitely a lot of demand for that.”
Guidelines face challenges
Meanwhile, Bill Wong, research fellow at Info-Tech Research Group, had differing thoughts, even though he agrees that a framework calling attention to AI makes sense, given that so many organizations are introducing AI-based solutions and changing their operations.
He said that the proposed guidelines face a number of challenges if they are to be adopted. “There has not been a history of organizations adopting government recommendations that are voluntary for several reasons, including government priorities not aligned with priorities from private sector organizations, insufficient funds, or the lack of expertise and resources required to implement government guidelines (such as the proposed AI risk-based management system),” he explained.
In addition, Wong noted, the 24 AI Safety and Security Board members, who represent a who’s who in AI, are probably not the best people to ask how to implement an AI risk management system. “The government already has this expertise, and should have leveraged the NIST AI Risk Management Framework (another example of leveraging existing resources and deliverables),” he said. “Hopefully, we will see this framework continue to evolve.”
While the idea of introducing heightened attention to AI and its use in organizations managing critical infrastructure is good, he said, “it is confusing why the DHS report focuses on Roles and Responsibilities, which is operational, and the recommendations come across as mandates or regulations like the EU AI Act.”
Wong added that, while many critical infrastructure organizations, such as utilities, are still developing their AI strategy, “it would be more useful (in my opinion) to focus on helping organizations with their AI strategy with a strong focus on Responsible AI, and introduce examples of how to operationalize the Responsible AI principles the organizations will establish.”
Another step in AI governance
Like Chhabra, David Brauchler, technical director at cybersecurity vendor NCC, sees the guidelines as a beginning, pointing out that frameworks like this are a starting point for organizations, providing them with big-picture guidelines, not roadmaps. He described the DHS initiative in an email as “representing another step in the ongoing evolution of AI governance and security that we’ve seen develop over the past two years. It doesn’t revolutionize the discussion (nor does it aim to), but it aligns many of the concerns associated with AI/ML systems with their relevant stakeholders.”
Overall, he said, this document serves as an acknowledgement that the security and privacy fundamentals that have applied to software systems historically also apply to AI today. The framework, said Brauchler, “also recognizes that AI introduces new risks in terms of privacy and automation, and organizations have a responsibility to ensure that the data of their users is safeguarded, and that these systems are properly protected with human oversight when implemented into critical risk applications, such as national infrastructure.”