Tiatra, LLC
Information Technology Solutions for Washington, DC Government Agencies
Get data, and the data culture, ready for AI

When it comes to AI adoption, the gap between ambition and execution can be difficult to bridge. Companies are trying to weave the technology into products, workflows, and strategies, but good intentions often collapse under the weight of day-to-day realities: messy data and the lack of a clear plan.

“That’s the challenge we see most often across the global manufacturers we work with,” says Rob McAveney, CTO at software developer Aras. “Many organizations assume they need AI, when the real starting point should be defining the decision you want AI to support, and making sure you have the right data behind it.”

Nearly two-thirds of leaders say their organizations have struggled to scale AI across the business, according to a recent McKinsey global survey. Often they can’t move beyond pilot programs, a challenge that’s even more pronounced among smaller organizations. When pilots fail to mature, investment decisions become harder to justify.

A typical issue is that the data simply isn’t ready for AI. Teams try to build sophisticated models on top of fragmented sources or messy data, hoping the technology will smooth over the cracks.

“From our perspective, the biggest barriers to meaningful AI outcomes are data quality, data consistency, and data context,” McAveney says. “When data lives in silos or isn’t governed with shared standards, AI will simply reflect those inconsistencies, leading to unreliable or misleading outcomes.”

It’s an issue that impacts almost every sector. Before organizations double down on new AI tools, they must first build stronger data governance, enforce quality standards, and clarify who actually owns the data meant to fuel these systems.

Making sure AI doesn’t take the wheel

In the rush to adopt AI, many organizations forget to ask the fundamental question of what problem actually needs to be solved. Without that clarity, it’s difficult to achieve meaningful results.

Anurag Sharma, CTO of VyStar Credit Union, believes AI is just another tool available to help solve a given business problem, and says every initiative should begin with a clear, simple statement of the business outcome it’s meant to deliver. He encourages his team to isolate issues AI could fix, and urges executives to understand what will change and who will be affected before anything moves forward.

“CIOs and CTOs can keep initiatives grounded by insisting on this discipline, and by slowing down the conversation just long enough to separate the shiny from the strategic,” Sharma says.

This distinction becomes much easier when an organization has an AI center of excellence (COE) or a dedicated working group focused on identifying real opportunities. These teams help sift through ideas, set priorities, and ensure initiatives are grounded in business needs rather than buzz.

The group should also include the people whose work will be affected by AI, along with business leaders, legal and compliance specialists, and security teams. Together, they can define baseline requirements that AI initiatives must meet.

“When those requirements are clear up front, teams can avoid pursuing AI projects that look exciting but lack a real business anchor,” says Kayla Underkoffler, director of AI security and policy advocacy at security and governance platform Zenity.

She adds that someone in the COE should have a solid grasp of the current AI risk landscape. That person should be ready to answer critical questions, knowing what concerns need to be addressed before every initiative goes live.

“A plan could have gaping cracks the team isn’t even aware of,” Underkoffler says. “It’s critical that security be included from the beginning to ensure the guardrails and risk assessment can be added from the beginning and not bolted on after the initiative is up and running.”

In addition, there should be clear, measurable business outcomes to make sure the effort is worthwhile. “Every proposal must define success metrics upfront,” says Akash Agrawal, VP of DevOps and DevSecOps at cloud-based quality engineering platform LambdaTest, Inc. “AI is never explored, it’s applied.”

He recommends companies build in regular 30- or 45-day checkpoints to ensure the work continues to align with business objectives. And if the results don’t meet expectations, organizations shouldn’t hesitate to reassess and make honest decisions, he says, even if that means walking away from the initiative altogether.

Yet even when the technology looks promising, humans still need to remain in the loop. “In an early pilot of our AI-based lead qualification, removing human review led to ineffective lead categorization,” says Shridhar Karale, CIO at sustainable waste solutions company Reworld. “We quickly retuned the model to include human feedback, so it continually refines and becomes more accurate over time.”

When decisions are made without human validation, organizations risk acting on faulty assumptions or misinterpreted patterns. The aim isn’t to replace people, but to build a partnership in which humans and machines strengthen one another.

Data, a strategic asset

Ensuring data is managed effectively is an often overlooked prerequisite for making AI work as intended. Creating the right conditions means treating data as a strategic asset: organizing it, cleaning it, and having the right policies in place so it stays reliable over time.

“CIOs should focus on data quality, integrity, and relevance,” says Paul Smith, CIO at Amnesty International. His organization works with unstructured data every day, often coming from external sources. Given the nature of the work, the quality of that data can be variable. Analysts sift through documents, videos, images, and reports, each produced in different formats and conditions. Managing such a high volume of messy, inconsistent, and often incomplete information has taught them the importance of rigor.

“There’s no such thing as unstructured data, only data that hasn’t yet had structure applied to it,” Smith says. He also urges organizations to start with the basics of strong, everyday data-governance habits. That means checking whether the data is relevant and ensuring it’s complete, accurate, and consistent, since outdated information can skew results.
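The everyday governance habits Smith describes can be automated as simple record-level checks. Below is a minimal sketch of what checking completeness and staleness might look like; the field names, retention window, and record shape are illustrative assumptions, not anything prescribed in the article.

```python
from datetime import date, timedelta

# Hypothetical governance checks: field completeness and staleness,
# mirroring the relevance/completeness/accuracy basics described above.
REQUIRED_FIELDS = {"source", "collected_on", "body"}
MAX_AGE = timedelta(days=365)  # assumed retention window for illustration

def governance_issues(record: dict, today: date) -> list[str]:
    """Return a list of data-governance problems found in one record."""
    issues = []
    # Completeness: every required field must be present and non-empty.
    present = {k for k, v in record.items() if v not in (None, "")}
    missing = REQUIRED_FIELDS - present
    if missing:
        issues.append(f"incomplete: missing {sorted(missing)}")
    # Staleness: outdated information can skew results.
    collected = record.get("collected_on")
    if isinstance(collected, date) and today - collected > MAX_AGE:
        issues.append("stale: older than retention window")
    return issues

record = {"source": "field report", "collected_on": date(2020, 1, 5), "body": "..."}
print(governance_issues(record, date(2025, 12, 8)))
```

In practice such checks would run automatically as data arrives, so that governance is a habit of the pipeline rather than a periodic cleanup project.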

Smith also emphasizes the importance of verifying data lineage. That includes establishing provenance — knowing where the data came from and whether its use meets legal and ethical standards — and reviewing any available documentation that details how it was collected or transformed.

In many organizations, messy data comes from legacy systems or manual entry workflows. “We strengthen reliability by standardizing schemas, enforcing data contracts, automating quality checks at ingestion, and consolidating observability across engineering,” says Agrawal.
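A data contract of the kind Agrawal mentions can be enforced at ingestion by validating each incoming row against a declared schema and quarantining anything that doesn't conform. This is a minimal sketch under assumed field names and types; real deployments typically use a dedicated validation library rather than hand-rolled checks.

```python
# Hypothetical contract for an "events" feed: field names and types here
# are illustrative, not from any specific system in the article.
CONTRACT = {"event_id": str, "amount": float, "region": str}

def ingest(rows: list[dict]) -> tuple[list[dict], list[dict]]:
    """Split incoming rows into accepted and quarantined sets."""
    accepted, quarantined = [], []
    for row in rows:
        # A row passes only if its fields exactly match the contract
        # and every value has the declared type.
        ok = set(row) == set(CONTRACT) and all(
            isinstance(row[field], typ) for field, typ in CONTRACT.items()
        )
        (accepted if ok else quarantined).append(row)
    return accepted, quarantined

good = {"event_id": "e1", "amount": 12.5, "region": "us-east"}
bad = {"event_id": "e2", "amount": "12.5"}  # wrong type, missing field
accepted, quarantined = ingest([good, bad])
print(len(accepted), len(quarantined))
```

Quarantining rather than silently dropping bad rows preserves the evidence needed to fix the upstream legacy system or manual-entry workflow that produced them.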

When teams trust the data, their AI outcomes improve. “If you can’t clearly answer where the data came from and how trustworthy it is, then you aren’t ready,” Sharma adds. “It’s better to slow down upfront than chase insights that are directionally wrong or operationally harmful, especially in the financial industry where trust is our currency.”

Karale says that at Reworld, they’ve created a single source of truth data fabric, and assigned data stewards to each domain. They also maintain a living data dictionary that makes definitions and access policies easy to find with a simple search. “Each entry includes lineage and ownership details so every team knows who’s responsible, and they can trust the data they use,” Karale adds.
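A living data dictionary like the one Karale describes can be as simple as a catalog of entries carrying definition, steward, and lineage, searchable by keyword. The entry below is entirely made up for illustration; Reworld's actual catalog structure is not described in the article.

```python
from dataclasses import dataclass

@dataclass
class DictionaryEntry:
    name: str
    definition: str
    owner: str          # the data steward responsible for this domain
    lineage: list[str]  # upstream systems the field flows from

# Hypothetical catalog entry; the field, steward, and systems are invented.
CATALOG = {
    "customer_tonnage": DictionaryEntry(
        name="customer_tonnage",
        definition="Monthly waste volume per customer, in metric tons",
        owner="operations-data-steward",
        lineage=["scale_house_db", "billing_etl"],
    ),
}

def lookup(term: str) -> list[DictionaryEntry]:
    """Simple search: match the term against entry names and definitions."""
    t = term.lower()
    return [
        e for e in CATALOG.values()
        if t in e.name.lower() or t in e.definition.lower()
    ]

print([e.owner for e in lookup("tonnage")])
```

Because each entry names its owner and upstream sources, a single search answers both “what does this field mean?” and “who do I ask when it looks wrong?”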

A hard look in the organizational mirror

AI has a way of amplifying whatever patterns it finds in the data — the helpful ones, but also the old biases organizations would rather leave behind. Avoiding that trap starts with recognizing that bias is often a structural issue.

CIOs can do a couple of things to prevent problems from taking root. “Vet all data used for training or pilot runs and confirm foundational controls are in place before AI enters the workflow,” says Underkoffler.

Also, try to understand in detail how agentic AI changes the risk model. “These systems introduce new forms of autonomy, dependency, and interaction,” she says. “Controls must evolve accordingly.”

Underkoffler adds that strong governance frameworks can guide organizations on monitoring, managing risks, and setting guardrails. These frameworks outline who’s responsible for overseeing AI systems, how decisions are documented, and when human judgment must step in, providing structure in an environment where the technology is evolving faster than most policies can keep up.

And Karale says that fairness metrics, such as disparate impact, play an important role in that oversight. These measures help teams understand whether an AI system is treating different groups equitably or unintentionally favoring one over another, and they can be incorporated into the model validation pipeline.
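Disparate impact is commonly measured as the ratio of favorable-outcome rates between two groups, often checked against the "four-fifths" threshold of 0.8. The sketch below shows the calculation on invented group labels and outcomes; a validation pipeline would run this on real holdout predictions, typically via a fairness library.

```python
def disparate_impact(outcomes: list[tuple[str, bool]],
                     group_a: str, group_b: str) -> float:
    """Favorable-outcome rate of group_a divided by that of group_b."""
    def rate(group: str) -> float:
        favored = [f for g, f in outcomes if g == group]
        return sum(favored) / len(favored)
    return rate(group_a) / rate(group_b)

# Made-up predictions: (group label, favorable outcome?)
outcomes = [("A", True), ("A", True), ("A", False), ("A", False),
            ("B", True), ("B", True), ("B", True), ("B", False)]

ratio = disparate_impact(outcomes, "A", "B")
# Flag the model for review if the ratio falls below the 0.8 threshold.
print(round(ratio, 3), "flag" if ratio < 0.8 else "ok")
```

Wiring a check like this into model validation means a biased model fails the pipeline the same way a low-accuracy one does, rather than relying on someone noticing after deployment.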

Domain experts can also play a key role in spotting and retraining models that produce biased or off-target outputs. They understand the context behind the data, so they’re often the first to notice when something doesn’t look right. “Continuous learning is just as important for machines as it is for people,” says Karale.

Amnesty International’s Smith agrees, saying organizations need to train their people continuously to help them pick out potential biases. “Raise awareness of risks and harms,” he says. “The first line of defense or risk mitigation is human.”


Source: News — December 8, 2025
