Data management trends: What’s in, what’s out

Data management is one of the most important functions of IT. It helps ensure the organization’s data is accurate, coherent, secure, and accessible to the users who need it, and can enhance decision-making, efficiency, and compliance with data privacy regulations.

Indeed, without data management, enterprises that strive to be digital businesses might lack a reliable foundation for success. 

That said, like most areas of business, data management strategies, techniques, and technologies are in constant flux. Here’s a look at the current trends in data management, including what’s in and what’s out.

In: Real-time data mastering to support ongoing operations

Data provided to users needs to be fresh to be useful. This is especially true in sectors subject to rapid change.

“When we’re matching clinicians to open roles, data can’t be stale,” says Taner Maia, senior product manager at CHG Healthcare, a provider of healthcare staffing services. “Provider licenses change, credentials get updated, and availability shifts constantly. We’re seeing a big move toward real-time architectures that surface the most current data to the teams who need it.”

Before CHG modernized its approach to data management, different divisions maintained their own systems, and staff often had to ask providers for the same information multiple times. “Records were inconsistent, and identifying duplicates across systems was a tedious, manual process,” Maia says.

Today, real-time access to unified provider data, through a solution from Tamr, has reduced duplicate record creation, improved the sourcing of new providers, and made it easier for teams to see the full context of a provider before engaging with them, Maia says. “It’s become a requirement for how we operate and helps us match the best provider to each client’s needs,” he says.
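The deduplication problem Maia describes can be illustrated with a toy sketch. This is not Tamr’s algorithm; the records, field names, and similarity threshold below are all illustrative assumptions, using simple string similarity to flag likely duplicate provider records:

```python
from difflib import SequenceMatcher

# Hypothetical provider records; field names are illustrative, not CHG's schema.
providers = [
    {"id": 1, "name": "Dr. Jane A. Smith", "license": "MD-4821"},
    {"id": 2, "name": "Jane Smith, MD", "license": "MD-4821"},
    {"id": 3, "name": "Robert Chen", "license": "NP-1077"},
]

def similarity(a: str, b: str) -> float:
    """Normalized string similarity in [0, 1]."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def find_duplicates(records, threshold=0.6):
    """Flag pairs that share a license number or have very similar names."""
    pairs = []
    for i in range(len(records)):
        for j in range(i + 1, len(records)):
            a, b = records[i], records[j]
            if a["license"] == b["license"] or similarity(a["name"], b["name"]) >= threshold:
                pairs.append((a["id"], b["id"]))
    return pairs

print(find_duplicates(providers))  # → [(1, 2)]
```

A production system would replace the pairwise loop with blocking and trained matching models, but the core idea, resolving multiple records to one entity before users see them, is the same.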

In: Data as a product

Data as a product is an approach in which enterprises treat data as a shareable, valuable product with clear ownership and documentation. Rather than being regarded as raw material, data is treated as a reusable, accessible asset that can solve business problems.

“Treating data as a product puts everybody in the workflow, working within the same ecosystem, and it’s one of the key things if you want to scale analytics,” says Roman Rylko, CTO at software development and consulting firm Pynest.

“Our teams were used to maintaining separate copies for research and production, but it gets you trapped really soon,” Rylko says. “Then you’re running around trying to consolidate warehouses and feature stores, hoping you finish in time to get the software out. The duplication cost us money, so for most of the projects we just moved to a single, governed lakehouse with a unified catalog.”

Users now pull whatever data they need, in whatever format, and can share it across teams. “Now we have fewer fires to put out; our analysts move faster,” Rylko says.
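One lightweight building block of data-as-a-product is a data contract: each published dataset declares its schema and an owner, and consumers validate rows against it before use. The contract format and field names below are illustrative assumptions, not Pynest’s implementation:

```python
# A minimal "data as a product" contract: the dataset declares its owner and
# schema; consumers check incoming rows against it. Fields are hypothetical.
CONTRACT = {
    "owner": "analytics-team",
    "fields": {
        "order_id": int,
        "amount": float,
        "currency": str,
    },
}

def validate(row: dict, contract: dict) -> list:
    """Return a list of contract violations for one row (empty if clean)."""
    errors = []
    for name, typ in contract["fields"].items():
        if name not in row:
            errors.append(f"missing field: {name}")
        elif not isinstance(row[name], typ):
            errors.append(f"{name}: expected {typ.__name__}, got {type(row[name]).__name__}")
    return errors

print(validate({"order_id": 7, "amount": 19.99, "currency": "USD"}, CONTRACT))  # → []
print(validate({"order_id": "7", "amount": 19.99}, CONTRACT))
# → ['order_id: expected int, got str', 'missing field: currency']
```

In practice this role is played by catalog and schema-registry tooling rather than hand-rolled checks, but the ownership-plus-schema idea is the essence of the approach.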

In: Data quality as a strategic asset

Enterprises continue to gather enormous volumes of data, and oftentimes some of this data is inaccurate, out of date, duplicate, inconsistent, or irrelevant. This data can lead to poor decision-making, inferior customer service and support, and even lost revenue.

While companies have been using tools such as data cleansing for years, they might not have treated data quality as a strategic asset that must be constantly maintained.

“Bad data undermines everything,” says William McKnight, president of McKnight Consulting Group. “AI and analytics only work well if the underlying data is accurate, relevant, and well-curated. Targeted efforts on high-impact datasets yield faster, more reliable results.”

Enterprises need to focus on cleansing, validating, enriching, and monitoring critical datasets — small, iterative improvements rather than trying to fix everything at once, McKnight says.
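As a rough illustration of that targeted, iterative approach, a quality check might compute a few metrics, such as completeness and freshness, for one critical dataset rather than attempt a global cleanup. The records, field names, and thresholds below are hypothetical:

```python
from datetime import date

# Toy customer records; a real pipeline would pull these from the warehouse.
rows = [
    {"email": "a@example.com", "updated": date(2026, 1, 10)},
    {"email": "",              "updated": date(2026, 1, 12)},
    {"email": "c@example.com", "updated": date(2024, 6, 1)},
]

def quality_report(rows, today=date(2026, 1, 15), max_age_days=365):
    """Score one high-impact dataset on a couple of quality dimensions."""
    total = len(rows)
    complete = sum(1 for r in rows if r["email"])
    fresh = sum(1 for r in rows if (today - r["updated"]).days <= max_age_days)
    return {
        "completeness": complete / total,  # share of rows with an email
        "freshness": fresh / total,        # share updated within a year
    }

report = quality_report(rows)
```

Tracking metrics like these per dataset over time makes quality a monitored asset rather than a one-off cleanup, which is the shift McKnight describes.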

In: Data lakehouses

The data lakehouse, a data architecture that combines the flexibility and efficiency of a data lake with the management and performance of a data warehouse, is on the rise. Enterprises can use data lakehouses to store and analyze various types of data, including structured, semi-structured, and unstructured data.

The global data lakehouse market was estimated at $11.35 billion in 2024 and is projected to reach $74.0 billion by 2033, according to a report from Grand View Research. The market’s growth has been driven by rising demand for unified data platforms that combine scalability with structure and performance to support advanced analytics and AI workloads, the report says.

“Data approaches that make the picture simpler and more transparent are currently winning,” Pynest’s Rylko says. “Instead of multiple disparate data warehouses, companies are moving to a clear lakehouse architecture. Teams have clear data contracts, lineage catalogs, and automated quality checks and monitoring.”

In: Governance that supports AI use cases

Data governance has been “in” for years. But in addition to ensuring data quality, security, privacy, and other data-related essentials, governance now takes on the role of ensuring AI outputs can be trusted.

“Good governance is no longer just about compliance — it’s about enabling AI to generate trustworthy insights,” McKnight says. “Structured data, clear ownership, and transparent lineage build confidence in results.”

Technology leaders need to focus on metadata management, data stewardship, lineage tracking, and clearly defined roles for managing AI-ready data, McKnight says.

Leading enterprises are implementing structured knowledge frameworks that encode business rules, product relationships, and compliance requirements into a semantic layer, says Sanjeev Mohan, principal at advisory firm SanjMo. “These guardrails enable autonomous operation while preventing costly errors,” he says. “CIOs who adopt this will report fewer AI mistakes requiring human intervention.”

Out: Mass AI deployment without prioritization

There are any number of reasons why it’s not a good idea to deploy AI broadly and quickly within an organization.

For one thing, a mass rollout of AI without careful thought can result in significant ethical and practical concerns, including the greater likelihood of unintentional bias and discrimination; the risk of taking humans out of decision-making too soon in the process; and operational failures.

There are also data security and privacy risks, and the lack of sufficient skills to achieve intended goals with AI. And of course, there’s the problem of using AI where it isn’t really needed or doesn’t belong. All of this can result in lots of projects that don’t deliver value or fail completely.

“Flooding AI with all enterprise data wastes resources and reduces trust,” McKnight says. “Selective, high-quality datasets produce better outcomes.”

Out: Rigid, monolithic platforms

The ability to adapt quickly to change is essential in today’s data management environment. The rise of AI has made agility an even more important trait than in the past.

“Stacks that can’t adapt quickly to new AI models and frameworks become obsolete fast,” McKnight says. “Flexibility is essential. AI models and tools evolve rapidly. Data platforms must be able to plug into multiple AI frameworks without being locked into one vendor or rigid architecture.”

Centralized data warehouses driven by the goal to consolidate data “are no longer a dominant trend, being replaced by more hybrid, platform-oriented approaches — such as data fabrics, lakehouses, and edge processing — that have a clear connection to [return on investment] and business use cases,” says Orla Daly, CIO at Skillsoft, a provider of technology training services and products.

“At an organizational level, this is driving a move towards hybrid operating models with some key responsibilities, such as governance, remaining centralized,” Daly says. “This shift supports real-time analytics and fit-for-use architecture to support AI-driven workloads, while maintaining governance.”

Out: Late data cleanups

Rather than handling data quality issues late in the process, enterprises are modernizing their data strategies and shifting data cleansing earlier to increase efficiency.

“Before we modernized our approach, a lot of our data quality work happened late in the process, after records had already been created and used across different teams,” CHG’s Maia says. “With fragmented systems and inconsistent provider information, issues often had to be fixed manually and usually by the same people.” This created delays and extra steps for teams that were trying to move quickly.

“What we’ve seen is that this kind of after-the-fact cleanup doesn’t scale,” Maia says. “You get better outcomes when data issues can be caught earlier and resolved closer to where the data is created or used.”
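The shift Maia describes, resolving issues close to where data is created, can be sketched as validation at write time: a record that fails a format or uniqueness check is rejected before it enters downstream systems. The license format, field names, and store are invented for illustration:

```python
import re

# Illustrative license format, e.g. "MD-4821"; not a real credentialing scheme.
LICENSE_PATTERN = re.compile(r"^[A-Z]{2}-\d{4}$")

def create_provider(store: list, record: dict) -> bool:
    """Validate at the point of creation; reject rather than clean up later."""
    if not LICENSE_PATTERN.match(record.get("license", "")):
        return False  # malformed license caught here, not downstream
    if any(p["license"] == record["license"] for p in store):
        return False  # duplicate blocked at the source
    store.append(record)
    return True

store = []
print(create_provider(store, {"name": "Jane Smith", "license": "MD-4821"}))  # → True
print(create_provider(store, {"name": "J. Smith", "license": "MD-4821"}))    # → False (duplicate)
print(create_provider(store, {"name": "Bob Lee", "license": "4821"}))        # → False (bad format)
```

The design choice is that rejections happen synchronously at creation, so no manual after-the-fact cleanup, and no duplicated fixes by the same people, is needed.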

What might help is AI-powered data quality monitoring, which is on the rise. “Poor data quality used to slow projects,” says Kelly Raskovich, senior manager and lead within Deloitte’s Office of the CTO. “Now it compounds through AI systems, creating cascading errors.”

As agents work, they generate data about their decisions and outcomes, Raskovich says. “This ‘digital exhaust’ from the silicon workforce becomes valuable for continuous improvement,” she says. “Instead of quarterly audits, organizations are using AI to monitor data quality and capture these insights in real-time.”

Out: DIY master data management

It might be tempting for enterprises to look for savings by going the do-it-yourself (DIY) route for master data management. But this can lead to problems down the road.

“We previously tried building our own data mastering solution at CHG,” Maia says. “It seemed reasonable at the time, but it quickly became clear that it wasn’t going to scale with the volume and complexity of our provider data. In addition, the skill sets required to maintain and evolve it were too specialized, and the operational cost was high.”

For organizations dealing with fast-changing, business-critical data, DIY approaches are becoming harder to justify, Maia says. “Modern data environments evolve too quickly for internal builds to keep pace, both in terms of scale and cost.”

Out: Pre-AI systems and practices

Many of the data management systems in place were likely deployed before AI had a major role in enterprise IT. That means it might be time for an update.

“A number of legacy practices are rapidly becoming obsolete, and enterprises that continue operating with human-first, governance-lite systems will struggle to scale AI beyond prototypes,” says Larissa Schneider, COO and co-founder of Unframe AI, which leverages large language models to create software products.

“Batch-only pipelines are simply too slow for AI-driven decisioning, which requires continuous, real-time context,” Schneider says. “Rip-and-replace modernization projects are also falling away, as enterprises no longer accept multi-year migrations. Tool sprawl without intelligence is reaching its end, with organizations consolidating around fewer platforms that embed AI directly into operations.”


Source: News
January 29, 2026