The real problem enterprises are facing
Enterprises are in an all-hands-on-deck push to stay relevant in the evolving AI race, with executive initiatives to invest heavily in the application layer of AI in pursuit of the productivity and performance gains promised by large language models and generative AI. This is particularly visible in analytics, which is moving away from traditional query writing, Excel and dashboarding toward conversational, chat-based analytics, offering business users and analysts a brand-new way to understand their data and support faster decision-making and product iteration.
But despite the push to enhance AI infrastructure within organizations, and initiatives to support AI adoption, one thing hasn't changed much: the reliance on data accuracy and consistency. A few weeks ago, I was in a quarterly business review where executives were still asking why the numbers didn't reconcile to the dollar across product reporting and financials. "Why does the methodology differ across geographies? This doesn't seem to pass the smoke test!"
In many instances, I've seen AI initiatives hit a headwind not because the infrastructure couldn't support them, or because teams lacked access to the latest models, but because leaders don't have confidence in the underlying data. When leaders get different answers to the same question depending on region, report or data access, that confidence erodes quickly.
And the interesting part is, AI didn't create this problem; it exposed it. The issue lies at the foundation, in the scheme of metrics governance.
AI didn’t break — metrics did
AI systems are very good at number crunching and reasoning over data, as long as the data comes with clear guidelines for the task. The models keep getting more powerful with time, but they still rely heavily on the user to supply the right instructions for the desired outcome, whether that is data analysis, finding anomalies, generating a chart or reasoning about why a metric behaves a certain way.
In many large organizations, especially those scattered across geographies, metrics can be both defined and interpreted slightly differently. For instance, finance may define net revenue at the time of revenue recognition, while product sees it differently and marketing applies yet another slightly different definition. Analysts working with this data in each department may be well versed, through tribal knowledge and shared internal documentation, in how their senior leaders perceive the information. AI doesn't understand this nuance, has no idea what caveats to apply and, as a result, produces numbers that don't reconcile.
So, when an executive asks AI for last quarter's net revenue, it has no sound understanding of which exact definition to apply, what assumptions to make and what to exclude. It simply reasons over whatever raw data exists. The answer may be numerically correct under one team's definition, but inconsistent across reporting.
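This divergence is easy to demonstrate. The sketch below runs two invented definitions of "net revenue" over the same raw transactions; every field name and accounting rule here is a hypothetical illustration, not any real team's methodology.

```python
# Hypothetical illustration: identical raw transactions, two team
# definitions of "net revenue". All fields and rules are invented.

transactions = [
    {"gross": 1000, "refund": 100, "deferred": 200, "partner_rebate": 50},
    {"gross": 2000, "refund": 0,   "deferred": 500, "partner_rebate": 120},
]

def net_revenue_finance(txns):
    # Finance: recognized revenue only, net of refunds, deferrals and rebates
    return sum(t["gross"] - t["refund"] - t["deferred"] - t["partner_rebate"]
               for t in txns)

def net_revenue_marketing(txns):
    # Marketing: bookings net of refunds, ignoring deferrals and rebates
    return sum(t["gross"] - t["refund"] for t in txns)

print(net_revenue_finance(transactions))    # 2030
print(net_revenue_marketing(transactions))  # 2900
```

Both results are arithmetically correct; neither is wrong. Without a governed definition, an AI system has no basis for choosing between them, and two dashboards built on the same warehouse will disagree.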
This is why AI adoption falls short at the executive layer. Analysts can still find value in assistance with query writing and data analysis, but leaders lose confidence quickly if the results are inconsistent.
Why AI governance alone doesn’t solve this
While this may not be a brand-new problem, and organizations have seen it in some shape or form, the first inclination might be to double down on AI governance. Organizations might create a council to govern policies around data privacy, bias removal, model approvals, data audits and guardrails against hallucinations. No doubt these are vital to prevent AI misuse, but they don't address the root cause of the problem.
AI governance focuses mainly on the behavior of AI systems and on what data can be accessed and by whom. It doesn't necessarily address what the data actually means. That gap becomes glaring when AI systems are deployed on top of inconsistent metric definitions. Even with the best high-performing models layered on best-in-class infrastructure and sound governance policies, you can still get unreliable results.
Ideally, the regulation of AI behavior should come after semantic metric definitions are in place, and this sequencing makes all the difference; organizations need to realize it sooner.
What metric governance actually addresses
Metric governance isn't about slowing down decision-making or centralizing control; it's about defining once and using everywhere consistently. That's how metrics become reliable, shareable business assets rather than siloed calculations embedded in individual reports.
A clearly defined governed metric goes beyond a SQL query logic or an arithmetic calculation in an Excel sheet; it includes:
- A clear business definition and context about the measurable event
- Version-controlled computation logic
- Team ownership and accountability
- Rules for updating logic through git
- Validation and reconciliation checks
Metric governance is designed to be BI tool agnostic; it’s the fundamental operating model that stays intact regardless of the BI tool in use.
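The components above can be sketched as a single structured definition. The schema and field names below are assumptions for illustration, loosely inspired by semantic-layer tooling; real systems (dbt, Cube and others) use their own formats.

```python
# A minimal sketch of a governed metric definition. The schema, field
# names and example values are all invented for illustration.

from dataclasses import dataclass, field

@dataclass(frozen=True)
class GovernedMetric:
    name: str                  # canonical metric name, used everywhere
    definition: str            # plain-language business definition
    sql: str                   # version-controlled computation logic
    owner: str                 # accountable team
    version: str               # bumped through a git pull request
    validations: list = field(default_factory=list)  # reconciliation checks

net_revenue = GovernedMetric(
    name="net_revenue",
    definition="Recognized revenue net of refunds, deferrals and rebates.",
    sql="SELECT SUM(gross - refund - deferred - rebate) FROM fct_revenue",
    owner="finance-data",
    version="2.1.0",
    validations=["reconciles_to_general_ledger_monthly"],
)

print(net_revenue.owner, net_revenue.version)  # finance-data 2.1.0
```

Because the definition, owner and validation rules travel with the metric itself, any BI tool or AI system that consumes `net_revenue` inherits the same meaning, which is the point of keeping the operating model tool-agnostic.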
Why ignoring metric governance can cost the organization more
AI systems can crunch through terabytes of data with absolutely no hiccups, so data volume is not the issue; data ambiguity is. Without a governed semantic metrics layer, the same question can yield disparate answers as the underlying query shifts. Financial reporting becomes brittle and prone to regulatory risk when definitions change and numbers drift without traceability. Analytics teams spend a non-trivial amount of time reproducing and defending numbers rather than performing new analysis to support the growth of the business. The problem compounds with every recurring data issue.
While speed in AI systems is important, it isn't fully appreciated until the results are reliable. In these instances, the hallucinations are not tied to model performance settings so much as to semantic definition failures that lead the model to inconsistent answers.
Upsides of organizational metrics discipline
With metrics governance built at the foundational level, supported by the infrastructure and data warehouse layers, AI adoption accelerates across the organization as models improve, delivering reliable results and an overall improvement in trust.
Organizations tend to see faster product iterations, faster data analysis, a reduction in reporting redundancies and fewer escalations from leadership over data inconsistencies. Analysts can spend more time on complex analysis than on reconciliation rework. AI systems built around metric governance can even let executives self-serve their data needs through conversational analytics in natural language, diving deeper into business data to find more opportunities.
This metrics discipline can also help uncover why numbers have been misaligned across departments for years, and what actually drives the differences.
Metrics become a reliable asset for a business, amplified and promoted by AI systems.
Redefining AI readiness
Organizations usually equate AI readiness with AI-ready infrastructure, larger compute, best-in-class models and sheer volume of training data. While these matter to overall system performance, real AI readiness at the executive level is simpler to state and harder to achieve.
Real AI readiness appears when leaders can have full faith in reporting numbers that are consistent across the board and supported by a robust validation framework, and when they can self-serve their basic data needs without depending on an analyst in the room. This trust is built on a solid foundation of metric governance: aligned metric definitions, clear metric ownership and clear guidelines for logic updates and ongoing maintenance. Without this foundation, AI just adds speed to confusion.
What can CIOs do differently?
The idea here is not to identify and centralize every metric across all teams and departments; it's a call for CIOs to recognize metric governance as a central component of data infrastructure and a key enabler of enterprise-wide AI adoption. CIOs can play a pivotal role in:
- Recognizing and evaluating metric governance as much as AI governance
- Establishing clear ownership and accountability of key business metrics
- Ensuring AI systems consume metrics through a strictly governed framework
- Treating trust in AI systems’ results as a north star of AI adoption
Key takeaways
AI didn't actually break; it revealed the cracks in the foundation. Enterprise-wide AI adoption isn't scaling, not for lack of best-in-class models or the infrastructure to support them (organizations have invested heavily in both), but because of a clear misalignment in meaning and semantics.
Metric governance isn't the headliner in the AI world, but it is a key component of the AI decision-making brain, because it determines how reliable AI's answers are. Before enterprises jump ahead to reason about their businesses with AI, they must take a step back and find consensus on what they are truly measuring and how that ties to the organization's success. Intelligence without shared meaning isn't truly intelligence; it just leads to quicker disagreements.
This article is published as part of the Foundry Expert Contributor Network.
Read More from This Article: Why senior management loses confidence in AI before it reaches scale

