How to establish an effective AI GRC framework

Enterprise use of artificial intelligence comes with a wide range of risks in areas such as cybersecurity, data privacy, bias and discrimination, ethics, and regulatory compliance. As such, organizations that create a governance, risk, and compliance (GRC) framework specifically for AI are best positioned to get the most value out of the technology while minimizing its risks and ensuring responsible and ethical use.   

Most companies have work to do in this area. A recent survey of 2,920 worldwide IT and business decision-makers conducted by Lenovo and research firm IDC found that only 24% of organizations have fully enforced enterprise AI GRC policies.

“If organizations don’t already have a GRC plan in place for AI, they should prioritize it,” says Jim Hundemer, CISO at enterprise software provider Kalderos.

Generative AI “is a ubiquitous resource available to employees across organizations today,” Hundemer says. “Organizations need to provide employees with guidance and training to help protect the organization against risks such as data leakage, exposing confidential or sensitive information to public AI learning models, and hallucinations, [when] a model’s prompt response is inaccurate or incorrect.”

Recent reports have shown that one in 12 employee generative AI prompts include sensitive company data and that organizations are no closer to containing shadow AI’s data risks despite providing employees with sanctioned AI options.
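The exposure described above is, at its simplest, a screening problem: check each outbound prompt for sensitive patterns before it ever reaches a public model. A minimal sketch of such a pre-submission check follows; the pattern set and category names are illustrative assumptions, not a complete DLP ruleset, and a production tool would use far richer detection.

```python
import re

# Illustrative patterns only; real DLP tooling uses much broader detection.
SENSITIVE_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),          # US SSN-like strings
    "api_key": re.compile(r"\b(?:sk|key)-[A-Za-z0-9]{16,}\b"),  # key-like tokens
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),  # email addresses
}

def scan_prompt(prompt: str) -> list[str]:
    """Return the categories of sensitive data detected in a prompt."""
    return [name for name, pat in SENSITIVE_PATTERNS.items() if pat.search(prompt)]

def is_safe_to_submit(prompt: str) -> bool:
    """True when no sensitive category matched, so the prompt may be sent."""
    return not scan_prompt(prompt)
```

A gateway sitting between employees and a public model could block or redact any prompt for which `is_safe_to_submit` returns `False`.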

Organizations need to incorporate AI into their GRC framework — and associated policies and standards — and data is at the heart of it all, says Kristina Podnar, senior policy director at the Data and Trust Alliance, a consortium of business and IT executives at major companies aiming to promote the responsible use of data and AI.

“As AI systems become more pervasive and powerful, it becomes imperative for organizations to identify and respond to those risks,” Podnar says.

Because AI introduces risks that traditional GRC frameworks may not fully address, such as algorithmic bias and lack of transparency and accountability for AI-driven decisions, an AI GRC framework helps organizations proactively identify, assess, and mitigate these risks, says Heather Clauson Haughian, co-founding partner at CM Law, who focuses on AI technology, data privacy, and cybersecurity.

“Other types of risks that an AI GRC framework can help mitigate include things such as security vulnerabilities where AI systems can be manipulated or exposed to data breaches, as well as operational failures when AI errors lead to costly business disruptions or reputational harm,” Haughian says.

For example, if a financial AI system makes flawed decisions, it could cause large-scale financial harm, Haughian says. In addition, AI-related laws are emerging globally, she says, which means organizations need to ensure data privacy, model transparency, and non-discrimination to stay compliant.

“An AI GRC plan allows companies to proactively address compliance instead of reacting to enforcement,” Haughian says.

Know the challenges ahead and at hand

IT and business leaders need to understand that creating and maintaining an AI GRC framework will not be easy.

“As attorneys we can often tell clients what to include in policies like an AI GRC policy [or] framework, but such advice should also be accompanied with advice to make sure organizations understand what challenges they are likely to face not only in drafting such policies, but also in implementing them,” Haughian says.

For example, advancements are happening so rapidly with AI that not only drafting but also keeping AI GRC policies up to date is a challenge. “If the AI GRC policies are overly strict, organizations will begin to see that this stifles innovation or that certain groups within the organization will simply find ways to work around such policies — or flat out disregard them,” Haughian says.

CIOs have been battling such shadow AI use since the inception of generative AI. Establishing an effective, company-specific AI GRC strategy is the No. 1 way to prevent a shadow AI disaster.

How should organizations go about creating their AI GRC plan, and what should go into such a plan? Here’s what experts suggest.

Build a governance structure with accountability

Most organizations fail to establish a well-defined governance structure for AI, Data and Trust Alliance’s Podnar says. “Evaluating the existing GRC plan/framework and determining whether it can be extended or amended based on AI ought to be the first consideration for any organization,” she says.

Without clear roles and responsibilities, for example, who will own what decisions, organizational risks will be misaligned with AI deployments and the results will be brand or reputation risks, regulatory violations, and the inability to take advantage of the opportunities AI provides, Podnar says.

“Where organizations choose to place accountability and delegated authority is dependent on the organization and its culture,” Podnar says. “There is no right or wrong answer globally, but there is a right and a wrong answer for your organization.”

Incorporating policy control and accountability into an AI GRC framework “would essentially define the roles and responsibilities for AI governance and establish mechanisms for policy enforcement and accountability, thereby ensuring that there is clear ownership and oversight of AI initiatives and that individuals are held accountable for their actions,” Haughian says.

A comprehensive AI GRC plan can help ensure AI systems are explainable and understandable, “which is critical for trust and adoption [and] something that is beginning to be a large hurdle in many organizations,” Haughian says.

Make AI governance a team effort

AI crosses virtually every facet of the business, so the GRC framework should include input from a broad spectrum of participants.

“We typically begin with stakeholder identification and inclusion by engaging a diverse group of sponsors, leaders, users, and experts,” says Ricardo Madan, senior vice president and head of global services at IT service provider TEKsystems.

This includes IT, legal, human resources, compliance, and lines of business. “This ensures a holistic and unified approach to prioritizing governance matters, goals, and issues for framework creation,” Madan says. “At this stage, we also build or ratify the organization’s AI values and ethical standards. From there, we set the plan and cadence for continuous feedback, iterative improvement, and progress tracking against these priorities.”

This process takes into account evolving regulatory changes, advancements in AI functionality, emerging data insights, and ongoing AI innovation, Madan says.

Create an AI risk profile

Enterprises need to create a risk profile, with an understanding of the organization’s risk appetite, what information is sensitive to the organization, and the consequences of sensitive information exposure to public learning models, Hundemer says.

Technology leaders can work with senior business leaders to determine the proper risk appetite (risk vs. reward) for the company and its workforce, Hundemer says.
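One lightweight way to make a risk appetite concrete is a scored risk register: each AI risk gets a likelihood and impact rating, and anything whose score exceeds an agreed threshold requires mitigation before deployment. The sketch below illustrates the idea; the 1–5 scales, the threshold value, and the example risks are assumptions, since every organization sets its own.

```python
from dataclasses import dataclass

@dataclass
class AIRisk:
    name: str
    likelihood: int  # 1 (rare) .. 5 (almost certain) -- assumed scale
    impact: int      # 1 (negligible) .. 5 (severe)   -- assumed scale

    @property
    def score(self) -> int:
        return self.likelihood * self.impact

# Risk appetite: scores above this require mitigation (illustrative value).
RISK_APPETITE_THRESHOLD = 9

def risks_exceeding_appetite(register: list[AIRisk]) -> list[str]:
    """Names of risks whose score exceeds the organization's appetite."""
    return [r.name for r in register if r.score > RISK_APPETITE_THRESHOLD]

# Hypothetical example entries.
register = [
    AIRisk("prompt data leakage", likelihood=4, impact=4),
    AIRisk("hallucination in internal search", likelihood=3, impact=2),
]
```

Reviewing this register with senior business leaders is one way to turn the abstract “risk vs. reward” conversation into specific, prioritized decisions.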

Detailing how the organization identifies, assesses, and mitigates AI-related risks, including regulatory compliance, “becomes so important because it helps the organization stay ahead of potential legal and financial liabilities, and ensures alignment with relevant regulations,” Haughian says.

Incorporate ethical principles and guidelines

CIOs have lately been grappling with the ethics of implementing AI even as they face pressure to deliver value from the technology quickly. The need to incorporate ethical principles into AI GRC can’t be emphasized enough, because AI introduces all kinds of risks related to unethical use that can get enterprises in trouble.

This section of the GRC plan “should define the organization’s ethical stance on AI, covering areas like fairness, transparency, accountability, privacy, and human oversight,” Haughian says. “This practice will establish a moral compass for AI development and deployment, preventing unintended harm and building trust.”

One example of this is having a policy stating that AI systems must be designed to avoid perpetuating or amplifying existing biases, with regular audits for fairness, Haughian says. Another is to ensure that all AI-driven decisions, especially those that have a large impact on people’s lives, are explainable.
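A regular fairness audit can start with a simple statistical check such as demographic parity: comparing favorable-outcome rates across groups. The sketch below implements that one metric under that assumption; the 0.1 review threshold is illustrative, not a legal or regulatory standard, and real audits typically combine several fairness measures.

```python
from collections import defaultdict

def demographic_parity_gap(outcomes: list[tuple[str, int]]) -> float:
    """Absolute gap in favorable-outcome rates between groups.

    outcomes: (group_label, decision) pairs, where decision 1 = favorable.
    """
    totals: dict[str, int] = defaultdict(int)
    favorable: dict[str, int] = defaultdict(int)
    for group, decision in outcomes:
        totals[group] += 1
        favorable[group] += decision
    rates = [favorable[g] / totals[g] for g in totals]
    return max(rates) - min(rates)

# Flag the model for human review when the gap exceeds a policy threshold
# (the value here is an assumed, illustrative policy choice).
GAP_THRESHOLD = 0.1
```

An audit job could compute this gap over each period’s decisions and route any model exceeding `GAP_THRESHOLD` to the governance board described earlier.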

Incorporate AI model governance

Model governance and lifecycle management are also key components of an effective AI GRC strategy, Haughian says. “This would cover the entire AI model lifecycle, from data acquisition and model development to deployment, monitoring, and retirement,” she says.

This practice will help ensure AI models are reliable, accurate, and consistently perform as expected, mitigating risks associated with model drift or errors, Haughian says.

Some examples of this would be establishing clear procedures for data validation, model testing, and performance monitoring; creating a version control system for AI models, and logging all changes made to those models; and implementing a system for periodic model retraining, to ensure that the model stays relevant.
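The version-control-and-logging procedure above can start as small as an append-only registry that records a hash of each model’s parameters alongside its evaluation metrics. The in-memory sketch below is a hypothetical illustration; a production registry would persist entries to a database or use a purpose-built tool, and the field names are assumptions.

```python
import hashlib
import json
from datetime import datetime, timezone

def log_model_version(registry: list[dict], model_name: str,
                      params: dict, metrics: dict) -> dict:
    """Append an auditable, versioned record of a model change."""
    entry = {
        "model": model_name,
        # Version numbers increment per model within the registry.
        "version": sum(1 for e in registry if e["model"] == model_name) + 1,
        # Hash of the parameters makes silent changes detectable.
        "params_hash": hashlib.sha256(
            json.dumps(params, sort_keys=True).encode()
        ).hexdigest()[:12],
        "metrics": metrics,
        "logged_at": datetime.now(timezone.utc).isoformat(),
    }
    registry.append(entry)
    return entry
```

Because every change lands in the registry with a hash and timestamp, reviewers can reconstruct exactly which parameters produced which monitored performance, which is the audit trail the lifecycle controls above call for.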

Make AI policies clear and enforceable 

Good policies balance out the risks and opportunities that AI and other emerging technologies, including those requiring massive data, can provide, Podnar says.

“Most organizations don’t document their deliberate boundaries via policy,” Podnar says. Without documented boundaries, employees either put the organization at risk by making up their own rules, or feel handcuffed from innovating and creating new products and services because they believe they must always ask IT before proceeding, she says.

“Organizations need to define clear policies covering responsibility, explainability, accountability, data management practices related to privacy and security and other aspects of operational risks and opportunities,” Podnar says.

Because all risks and industries are not equal, organizations need to focus on their own tolerance for risk with the business objectives that AI is intended to satisfy, Podnar says. Policies need to have enforcement mechanisms that are understood by users.

Get feedback and make refinements

It’s important to communicate AI guidelines to the entire organization — and seek feedback to enhance policies on an ongoing basis to better meet the needs of users.


TEKsystems constantly documents and reports on AI usage, performance, and framework testing based on user feedback, Madan says. “This is part of our ongoing commitment to refinement, audit readiness, and assurance,” he says.

“Since AI models change over time and require continuous monitoring, a strong governance process needs to be in place to ensure AI remains effective and compliant throughout its lifecycle,” Podnar says. “This may involve assessing and using model validation and monitoring protocols, [and] creating automated rules with alerts for when things go off the intended path.”
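The “automated rules with alerts” Podnar describes can begin as a basic drift check that compares a monitored metric against its baseline. The sketch below is a simplified illustration: a real monitoring pipeline would use richer statistics such as population stability index or Kolmogorov–Smirnov tests, and the 20% tolerance is an assumed policy value.

```python
def drift_alert(baseline: list[float], current: list[float],
                threshold: float = 0.2) -> bool:
    """Fire an alert when a monitored metric's mean shifts beyond a tolerance.

    threshold is the allowed relative shift (0.2 = 20%, an assumed value).
    """
    base_mean = sum(baseline) / len(baseline)
    curr_mean = sum(current) / len(current)
    relative_shift = abs(curr_mean - base_mean) / abs(base_mean)
    return relative_shift > threshold
```

Run on a schedule against, say, weekly accuracy or approval-rate figures, a check like this turns “continuous monitoring” from a policy sentence into an enforceable control that pages a human when the model goes off the intended path.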

May 16, 2025