GAO report says DHS, other agencies need to up their game in AI risk assessment

A new report from the US Government Accountability Office (GAO) appears to indicate that no US federal agency reporting into the Department of Homeland Security (DHS) knows the full extent or probability of harm that AI can do to the nation’s critical infrastructure.

The report, released earlier this week, concluded that DHS needs to improve its risk assessment guidance, noting one area in which every agency fell short: “None of the assessments fully evaluated the level of risk in that they did not include a measurement that reflected both the magnitude of harm (level of impact) and the probability of an event occurring (likelihood of occurrence). Further, no agencies fully mapped mitigation strategies to risks, because the level of risk was not evaluated.”
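
For context, the “level of risk” the GAO describes is conventionally a single measure that combines those two factors. Below is a minimal Python sketch of that scoring idea; the scales, scores, and example use case are illustrative assumptions, not values from the report.

# Illustrative only: the GAO's point is that a risk evaluation should combine
# magnitude of harm (impact) with probability of occurrence (likelihood).
# The scales and the example below are hypothetical, not drawn from the report.

IMPACT = {"low": 1, "moderate": 2, "high": 3}          # magnitude of harm
LIKELIHOOD = {"rare": 1, "possible": 2, "likely": 3}   # probability of occurrence

def risk_level(impact: str, likelihood: str) -> int:
    """Combine impact and likelihood into a single risk score."""
    return IMPACT[impact] * LIKELIHOOD[likelihood]

# Hypothetical AI use case in a critical-infrastructure sector
score = risk_level("high", "possible")
print(f"risk score: {score}")  # 6 -- a score that mitigation strategies could then be mapped to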

AI, report authors noted, has the “potential to introduce improvements and rapidly change many areas. However, deploying AI may make critical infrastructure systems that support the nation’s essential functions, such as supplying water, generating electricity, and producing food, more vulnerable.”

Nobody knows the probability of harm

The GAO said it is “recommending that DHS act quickly to update its guidance and template for AI risk assessments to address the remaining gaps identified in this report.” DHS, in turn, the GAO said, “agreed with our recommendation and stated it plans to provide agencies with additional guidance that addresses gaps in the report including identifying potential risks and evaluating the level of risk.”

Peter Rutten, research vice president at IDC, who specializes in performance-intensive computing, said Friday that his take is, “indeed, no DHS agency knows the full extent or probability of harm that AI can do to the US critical infrastructure. I’d argue that, today, no entity knows the full extent or probability of harm that AI can do in general — whether it is an enterprise, government, academia, you name it.”

AI, he said, “is being pushed out to businesses and consumers by organizations that profit from doing so, and assessing and addressing the potential harm it may cause has until recently been an afterthought. We are now seeing more focus on these potential negative effects, but efforts to contain them, let alone prevent them, will always be far behind the steamroller of new innovations in the AI realm.”

Thomas Randall, research lead at Info-Tech Research Group, said, “it is interesting that the DHS had no assessments that evaluated the level of risk for AI use and implementation, but had largely identified mitigation strategies. What this may mean is the DHS is taking a precautionary approach in the time it was given to complete this assessment.”

Some risks, he said, “may be identified as significant enough to warrant mitigation regardless of precise quantification of that risk. Moreover, some broad mitigation strategies are valuable to implement regardless of specific risk (such as ensuring explainability or having regular audits).”

According to Randall, “given the lead agencies only had 90 days to complete their assessments, choosing to document broader risk mitigation strategies achieves broader value than individually evaluating the level of each risk. This is a task that should come next, though, now that use cases, risks, and broader mitigation strategies have been identified.”

The report noted that federal agencies with a lead role in protecting the nation’s critical infrastructure sectors, referred to as sector risk management agencies (SRMAs), were required to develop and submit initial risk assessments for each of the critical infrastructure sectors to DHS by January 2024, in coordination with DHS’s Cybersecurity and Infrastructure Security Agency (CISA).

However, it said, “although the agencies submitted the sector risk assessments to DHS as required, none fully addressed the six activities that establish a foundation for effective risk assessment and mitigation of potential artificial intelligence (AI) risks. For example, while all assessments identified AI use cases, such as monitoring and enhancing digital and physical surveillance, most did not fully identify potential risks, including the likelihood of a risk occurring.”

Rutten didn’t find this unreasonable. He noted, “it’s entirely fair that the agencies were unable to assess the extent or likelihood of harm. There are thousands of algorithms in circulation — some proprietary, some open source — each with its own development history, data used to train, accuracy rates, and hallucination probability, not to mention vulnerabilities.”

Not until preventing harm at the foundation of algorithm development becomes the norm (and mandatory), he said, will it be possible to determine how safe these algorithms are. “Investigating, testing, and assessing them all is impossible, not in the least because an algorithm may iterate harmlessly millions of times, and then suddenly make one crucial mistake,” he said. “In other words, the horse is out of the barn, and I don’t see how we are going to catch up with it for the foreseeable future.”



December 20, 2024
