White House requires agencies to create AI safeguards, appoint CAIOs

US government agencies will need to provide human oversight of AI models that make critical decisions about healthcare, employment, and other issues affecting people, in order to comply with a new policy from the White House Office of Management and Budget (OMB).

The AI use policy, announced Thursday, requires agencies to appoint chief AI officers (CAIOs) and to put safeguards in place to protect human rights and maintain public safety.

“While AI is improving operations and service delivery across the Federal Government, agencies must effectively manage its use,” the policy says. “With appropriate safeguards in place, AI can be a helpful tool for modernizing agency operations and improving Federal Government service to the public.”

The 34-page policy requires most agencies, excepting the Department of Defense and intelligence agencies, to inventory their AI use annually. Agencies must also continually monitor their AI use.

The OMB policy will, for example, allow airline travelers to opt out of the Transportation Security Administration’s (TSA’s) use of facial recognition software, according to a fact sheet issued with the policy.

Other examples from the fact sheet: When AI is used in the federal healthcare system to support diagnostic decisions, a human will be required to verify the AI’s results. When AI is used to detect fraud in government services, a human will be required to review the results, and affected people will be able to seek remedies for any harm the AI causes.

AI’s impact on public safety

The policy defines several uses of AI that could impact public safety and human rights, and it requires agencies to put safeguards in place by Dec. 1. The safeguards must include ways to mitigate the risks of algorithmic discrimination and provide the public with transparency into government AI use.

Agencies must stop using AIs that can’t meet the safeguards. They must notify the public of any AI exempted from complying with the OMB policy and explain the justifications.

AIs that control dams, electrical grids, traffic control systems, vehicles, and workplace robotic systems are classified as safety-impacting. Meanwhile, AIs that block or remove protected speech, produce risk assessments of individuals for law enforcement agencies, or conduct biometric identification are classified as rights-impacting. AI decisions about healthcare, housing, employment, medical diagnosis, and immigration status also fall into the rights-impacting category.

The OMB policy also calls on agencies to release government-owned AI code, models, and data when doing so does not pose a risk to the public or to government operations.

The new policy received mixed reviews from some human rights and digital rights groups. The American Civil Liberties Union called the policy an important step toward protecting US residents against AI abuses. But the policy has major holes in it, including broad exceptions for national security systems and intelligence agencies, the ACLU noted. The policy also has exceptions for sensitive law enforcement information.

“Federal uses of AI should not be permitted to undermine rights and safety, but harmful and discriminatory uses of AI by national security agencies, state governments, and more remain largely unchecked,” Cody Venzke, senior policy counsel with the ACLU, said in a statement. “Policymakers must step up to fill in those gaps and create the protections we deserve.”  

Congressional action is also needed because the OMB policy doesn’t apply to private industry, added Nick Garcia, policy counsel at Public Knowledge, a digital rights group.

“This is an instance of the federal government leading by example, and it shows what we should expect of those with the power and resources to ensure that AI technology is used safely and responsibly,” he said.

Leading by example

Garcia said he hopes the standards and practices for responsible AI use in the OMB policy will serve as a “good testing ground” for future rules. “We need significant government support for public AI research and resources, a comprehensive privacy law, stronger antitrust protections, and an expert regulatory agency with the resources and authority to keep up with the pace of innovation,” he added.

Another digital rights group, the Center for Democracy and Technology, applauded the OMB policy, saying it allows the federal government to lead by example. The policy will also give the federal government a consistent set of AI rules and give the public transparency into government AI use, the CDT said.

While much of the policy’s focus is on safety and human rights, it also encourages government agencies to explore responsible use of AI, noted Kevin Smith, chief product officer at DISCO, an AI-powered legal technology company.

The OMB’s approach differs from the EU’s AI Act, which leans more into the risks of AI, Smith said. “The OMB’s approach encourages agencies to adopt AI on their own terms, allowing for risk assessment, reporting, and accountability,” he added.

The OMB policy follows an October executive order from President Joe Biden outlining safe AI use.

“This next step is akin to encouragement with transparency,” Smith said. “The administration didn’t set itself up to fail or unnecessarily curtail innovative thinking, which was smart considering how rapidly AI is advancing.”
