Your biggest AI risk might be that employees don’t know they’re using it

When assessing AI risk, organizations often focus on the most complex threats: algorithmic bias, intellectual property concerns or emerging regulation. But one of the fastest-growing and most overlooked risks is far simpler — employees may not realize they’re using AI at all.

AI is no longer confined to enterprise innovation labs or data science teams. It’s embedded in everyday workflows through tools like Microsoft Copilot, Google Gemini, email summarizers, CRM chatbots and recruiting platforms. Many employees are using AI daily, often without realizing it.

Nearly all Americans use products that involve AI features, but nearly two-thirds (64%) don’t realize it. Meanwhile, only 24% of workers who received job training in 2024 say it was related to AI use. Either way, employees are using AI, intentionally or not, and the fear, confusion and unclear policies around its use can create unintentional and unexpected problems.

The result: growing exposure, under-the-table use and policies that may look good on paper but are functionally invisible in practice.

Awareness is the missing link between policy and practice

Strong AI policies are essential: they define expectations, articulate principles, and set the guardrails for responsible use. But policy alone isn’t enough. Even the most well-crafted frameworks risk falling short without a corresponding investment in awareness and enablement.

Employees can’t follow what they don’t fully understand. Many are unaware when AI capabilities are embedded in the tools they use or what responsibilities come with those interactions. Closing that gap requires more than publishing rules; it demands ongoing education and contextual support, especially in decentralized, fast-moving environments.

5 key considerations for building AI literacy and reducing risk

Without a clear understanding of AI tools and policies, there’s a risk of unintentional misuse, shadow AI practices and inconsistent adherence to governance frameworks. Here are five key considerations to help close those knowledge gaps and build an enterprise-wide culture of AI literacy and risk awareness.

1. Start with awareness, not just rules

With generative and predictive tools embedded in everyday platforms, most users engage with AI passively and often unknowingly. That’s why the first step in any enablement effort must be awareness.

Employees should be introduced to AI in an accessible way that is relevant to their workflow and grounded in real examples. It’s not enough to say, “Don’t upload sensitive information to AI tools.” People need to understand what qualifies as an AI tool, when they’re using one and why certain behaviors create risk.

Start with easy-to-grasp definitions. Use language that resonates with non-technical teams. Frame the message not as a restriction but as a shared responsibility — one that protects the organization and empowers smarter decisions at the front lines.

2. Involve employees in shaping the policies

When people feel ownership over the tools and rules that shape their work, they’re far more likely to understand, remember and apply them. For example, asking a group of employees to read the draft AI policy and provide feedback on unclear or overly technical language can spark valuable cross-functional dialogue, reveal gaps in understanding and directly inform revisions to make the final policy more approachable. More importantly, it sends a clear message: This isn’t a top-down document written in a legal or technical vacuum — it’s meant to work in practice.

This kind of participatory approach transforms policy from a static document into a shared standard. It builds credibility and promotes adoption across departments, particularly in complex organizations.

3. Use the “drip method” to reinforce learning

Research on the forgetting curve shows that learners forget more than 50% of new information within an hour of learning it — and even more within a week without reinforcement.
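The forgetting-curve claim above is commonly modeled with Ebbinghaus-style exponential decay, R(t) = e^(-t/S), where S is a memory "strength" parameter. As a rough sketch (the strength value below is illustrative, not from the source):

```python
import math

def retention(hours_elapsed: float, strength: float) -> float:
    """Fraction of new material retained after `hours_elapsed` hours,
    under the exponential-decay model R(t) = e^(-t/S)."""
    return math.exp(-hours_elapsed / strength)

# With an assumed strength of S = 1.4 hours, retention falls below
# 50% within a single hour, consistent with the statistic above.
print(round(retention(1.0, 1.4), 2))
```

The parameter S is where reinforcement matters: each well-timed refresher effectively increases strength, flattening the decay, which is the rationale for the drip method described next.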

That’s why one-off policy briefings and static training modules often fail to create lasting behavioral change. Instead, organizations should adopt the drip method: delivering small, focused messages at regular intervals through the platforms employees already use, such as email, Slack, Microsoft Teams, or internal dashboards.

This microlearning approach boosts retention and builds long-term familiarity. And when tailored to real-time tools, use cases and evolving regulatory risks, it becomes not just reinforcement but strategic enablement.
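A drip campaign like the one described above is, mechanically, just a spaced schedule of short messages. A minimal sketch, assuming illustrative intervals that widen over time (the specific cadence and function names here are assumptions, not a prescribed standard):

```python
from datetime import date, timedelta

def drip_schedule(start: date, intervals_days=(1, 3, 7, 14, 30)) -> list[date]:
    """Return the send dates for a series of microlearning messages,
    spaced at widening intervals after the initial training date."""
    return [start + timedelta(days=d) for d in intervals_days]

# Example: reminders following a training session on 2025-09-08.
for send_date in drip_schedule(date(2025, 9, 8)):
    print(send_date.isoformat())
```

In practice the delivery step would hand these dates to whatever channel the organization already uses (email, Slack, Microsoft Teams); the value is in the cadence, not the tooling.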

4. Tailor training by role and risk

Not all AI use is created equal. A developer using generative models to write code faces different exposures than a marketer using an AI-enabled writing tool. Likewise, executives making decisions based on predictive analytics carry a different set of responsibilities than customer service reps interacting with chatbot platforms.

Risk exposure should drive training depth. Higher-risk roles may need more frequent refreshers or scenario-based simulations, while lower-risk teams may benefit from just-in-time reminders or onboarding briefings.

Create modular learning paths tailored by job function, geography and toolset. Consider the regulatory implications of location. For example, employees in the EU may need to follow different transparency protocols under the EU AI Act than their US counterparts, even when using the same tool. Training must reflect those distinctions.

5. Measure both completion and comprehension

Training metrics often default to completion rates, but a finished module doesn’t guarantee understanding. One of the biggest red flags in any enablement program is silence. When employees aren’t asking questions, offering feedback or flagging uncertainty, it may signal disengagement rather than understanding. Track both quantitative and qualitative indicators, such as the following.

Quantitative metrics can include:

  • Percentage of employees who complete required training
  • Time spent on modules
  • Help desk tickets related to AI tools or policy questions

Qualitative insights may come from:

  • Feedback surveys following training
  • Focus groups or pilot testing for new tools
  • Informal conversations with team leads about what’s working and what’s not

These signals help organizations spot knowledge gaps early and adjust communications accordingly. They also support a more adaptive approach to governance — one where education and oversight evolve in tandem with the increasing use of AI across the business.

Turning awareness into operational strength

As AI continues to integrate into everyday workflows, organizations must start investing in the awareness, understanding and behavior change needed to support AI governance. That means treating AI literacy as an enterprise competency, not just a compliance checkbox.

The risks of inaction are unintentional misuse, inconsistent adoption, growing regulatory exposure and erosion of trust in these new technologies. But the opportunity is just as significant. By enabling employees to recognize, question and engage responsibly with AI, organizations empower their workforce to innovate with clarity and confidence. That’s the real goal of AI enablement: not just protecting the business from what could go wrong but preparing it to move forward successfully in an AI-enabled world.

This article is published as part of the Foundry Expert Contributor Network.



September 5, 2025
