Tiatra, LLC
Information Technology Solutions for Washington, DC Government Agencies
New framework aims to keep AI safe in US critical infrastructure

Analyst reaction to Thursday’s release by the US Department of Homeland Security (DHS) of a framework designed to ensure safe and secure deployment of AI in critical infrastructure is decidedly mixed.

Where did it come from?

According to a release issued by DHS, “this first-of-its kind resource was developed by and for entities at each layer of the AI supply chain: cloud and compute providers, AI developers, and critical infrastructure owners and operators — as well as the civil society and public sector entities that protect and advocate for consumers.”

Representatives from each sector sit on the Artificial Intelligence Safety and Security Board, a public-private advisory committee formed by DHS Secretary Alejandro N. Mayorkas, which, the release said, “determined the need for clear guidance on how each layer of the AI supply chain can do their part to ensure that AI is deployed safely and securely in US critical infrastructure.”

The board, formed in April, is made up of major software and hardware companies, critical infrastructure operators, public officials, the civil rights community, and academia, according to the release.

A once-in-a-generation opportunity

Mayorkas explained the need for the framework in a report outlining the initiative: “AI is already altering the way Americans interface with critical infrastructure. New technology, for example, is helping to sort and distribute mail to American households, quickly detect earthquakes and predict aftershocks, and prevent blackouts and other electric-service interruptions. These uses do not come without risk, though: a false alert of an earthquake can create panic, and a vulnerability introduced by a new technology may risk exposing critical systems to nefarious actors.”

AI, he said, offers “a once-in-a-generation opportunity to improve the strength and resilience of US critical infrastructure, and we must seize it while minimizing its potential harms. The framework, if widely adopted, will go a long way to better ensure the safety and security of critical services that deliver clean water, consistent power, internet access, and more.”

The release goes on to say that DHS identified three primary categories of AI safety and security vulnerabilities in critical infrastructure: “attacks using AI, attacks targeting AI systems, and design and implementation failures. To address these vulnerabilities, the framework recommends actions directed to each of the key stakeholders supporting the development and deployment of AI in US critical infrastructure.”

Industry asked for intervention

Naveen Chhabra, principal analyst with Forrester, said, “[While] average enterprises may not directly benefit from it, this is going to be an important framework for those that are investing in AI models.”

It is, he noted, not a final document, but “a living document, because we expect to see massive advancements in the AI space in the coming years.”

Asked why he thinks DHS felt the need to create the framework, Chhabra said that developments in the AI industry are “unique, in the sense that the industry is going back to the government and asking for intervention in ensuring that we, collectively, develop safe and secure AI.”

The question, he said, is why the industry needs to do so. “Because in AI developments we are (intentionally) developing something that is going to be thousands/million times more intelligent than humans,” he explained. “Until AGI [artificial general intelligence] becomes a reality, we will continue to build use-case specific AI. No species ever has been intelligent/smarter than humans and we have never seen that play out in the human history. What if it goes rogue, what if it is uncontrolled, what if it becomes the next arms race, how will the national security be ensured?”

Create a level playing field

Echoing similar thoughts, Peter Rutten, research vice president at IDC, who specializes in performance-intensive computing, said Friday that guidelines to secure AI development and deployment, whether within organizations, within DHS itself, or within any other US government department, are absolutely critical. IDC research reveals that security is the number one concern in every sector, be it the enterprise, academia, or government.

“Everybody is worried not just that they will be exposing their data, or that their data is going to be misused, but also what then that means for their reputation, for their revenue streams,” he said. “If you make a big mistake and data is being compromised … there will be an uproar, and there have been uproars already, so this is a prime concern.”

“There has been a lot of criticism about how generative AI might become discriminating, how it might hide malicious content,” Rutten said. “Even in the [tech] industry, from the likes of OpenAI and other developers of AI algorithms, there’s [an] enormous amount of concern about how these algorithms might be misused, how malicious content might get in there, how people with bad intentions might get access to them.”

“[People] have been calling for regulation,” he continued. “They have been asking for the government to do something that would create a level playing field for everybody to stick with certain rules.”

There is, he said, “almost a desire for some lawmaking, so that people know how to go about doing this, what is expected from them, but also that they know that their competitors have to abide by the same rules, so that there is no disadvantage if you follow the rules. There is definitely a lot of demand for that.”

Guidelines face challenges

Meanwhile, Bill Wong, research fellow at Info-Tech Research Group, offered a different view, though he agrees that a framework calling attention to AI makes sense, given how many organizations are introducing AI-based solutions and changing their operations as a result.

He said that the proposed guidelines face a number of challenges if they are to be adopted. “There has not been a history of organizations adopting government recommendations that are voluntary for several reasons, including government priorities not aligned with priorities from private sector organizations, insufficient funds, or the lack of expertise and resources required to implement government guidelines (such as the proposed AI risk-based management system),” he explained.

In addition, Wong noted, the 24 AI Safety and Security Board members, who represent a who’s who in AI, are probably not the best people to ask how to implement an AI risk management system. “The government already has this expertise, and should have leveraged the NIST AI Risk Management Framework (another example of leveraging existing resources and deliverables),” he said. “Hopefully, we will see this framework continue to evolve.”

While the idea of introducing heightened attention to AI and its use in organizations managing critical infrastructure is good, he said, “it is confusing why the DHS report focuses on Roles and Responsibilities, which is operational, and the recommendations come across as mandates or regulations like the EU AI Act.”

Wong added that, while many critical infrastructure organizations, such as utilities, are still developing their AI strategy, “it would be more useful (in my opinion) to focus on helping organizations with their AI strategy with a strong focus on Responsible AI, and introduce examples of how to operationalize the Responsible AI principles the organizations will establish.”

Another step in AI governance

Like Chhabra, David Brauchler, technical director at cybersecurity vendor NCC Group, sees the guidelines as a beginning, pointing out that frameworks like this are just a starting point for organizations, providing them with big-picture guidance, not roadmaps. He described the DHS initiative in an email as “representing another step in the ongoing evolution of AI governance and security that we’ve seen develop over the past two years. It doesn’t revolutionize the discussion (nor does it aim to), but it aligns many of the concerns associated with AI/ML systems with their relevant stakeholders.”

Overall, he said, this document serves as an acknowledgement that the security and privacy fundamentals that have applied to software systems historically also apply to AI today. The framework, said Brauchler, “also recognizes that AI introduces new risks in terms of privacy and automation, and organizations have a responsibility to ensure that the data of their users is safeguarded, and that these systems are properly protected with human oversight when implemented into critical risk applications, such as national infrastructure.”


Source: News
November 16, 2024
