AI FOMO: When AI is the wrong answer to the right problem

Most AI project failures I have seen do not announce themselves cleanly. There is rarely a moment where someone stands up and admits to making the wrong call. Instead, the project quietly underdelivers. The team makes constant adjustments; leadership loses confidence and eventually the whole thing is filed away under “we tried AI and it did not work out.” This happens without anyone doing a real accounting of what the decision actually cost.

I was close to one of those situations not long ago. An organization had a system built around county-level values that drove a core business process. Over time, those values had drifted and the outputs were degrading in ways that affected the bottom line. The path forward was not complicated: A targeted update to the underlying values and some lightweight tooling to detect drift going forward. It would have been a few weeks of focused work at a modest cost with high confidence in the outcome.
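
For what it is worth, the tooling in question did not need to be sophisticated. A minimal sketch of what a drift check on a table of reference values can look like, with every name, value and threshold below invented for illustration:

```python
# Minimal drift check for a table of county-level reference values.
# All names, values and the tolerance here are hypothetical.

def detect_drift(reference: dict[str, float],
                 observed: dict[str, float],
                 tolerance: float = 0.05) -> list[str]:
    """Return the counties whose observed value has moved more than
    `tolerance` (as a fraction) away from the reference value."""
    drifted = []
    for county, ref_value in reference.items():
        obs_value = observed.get(county)
        if obs_value is None:
            continue  # missing data deserves its own alert path
        if ref_value and abs(obs_value - ref_value) / abs(ref_value) > tolerance:
            drifted.append(county)
    return drifted

# Flag counties whose values have moved more than 5 percent.
reference = {"Arlington": 1.00, "Fairfax": 0.92, "Loudoun": 1.10}
observed = {"Arlington": 1.02, "Fairfax": 0.81, "Loudoun": 1.11}
print(detect_drift(reference, observed))  # ['Fairfax']
```

The point is not the code; it is that a few dozen lines of deterministic logic, run on a schedule, were a complete answer to a deterministic problem.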

What happened instead was that the organization decided to rebuild the system entirely using a non-deterministic AI model. This is worth pausing on because the original problem was deterministic by nature. It had known inputs, predictable logic and a correct answer that did not change based on inference or probability. Reaching for a non-deterministic solution in that context was not a technology decision; it was a category error. I understand why it was made. AI was consuming every boardroom conversation at the time and there was real pressure to be seen doing something proportionate to the moment.

The new system appeared to correct the original problem for a while, and it looked like the right call. Then the drift returned, worse than before, and the expense they had been trying to eliminate returned at a scale that dwarfed the original issue. The organization had applied the wrong class of solution to a well-defined problem, and nobody in the room had stopped to ask whether that mattered.

The capital allocation problem

This is not an isolated story. Harrison Allen Lewis, a three-time CIO, recently published a piece that puts a number on the broader pattern. He argues that in most enterprises, somewhere between 15 and 25 percent of technology spend is tied up in redundant systems that deliver no material business value. This trend is mirrored in recent Deloitte research on the “AI ROI paradox”: While 85 percent of organizations increased their AI spend in 2025, the average payback period for these investments has stretched to nearly four years. This is a significant departure from the traditional seven- to 12-month window for enterprise technology. These are not technology failures; they are capital allocation problems.

What sits underneath that number is AI FOMO. The fear of being the organization that did not move fast enough is real and sometimes legitimate. But FOMO is a particularly dangerous input to a capital allocation decision because it optimizes for the appearance of action rather than the quality of the outcome. It pushes organizations toward the sophisticated answer when the precise one would have been faster, cheaper and more durable.

The result is spend that accumulates without a clear line back to value. Boston Consulting Group recently found that while 88 percent of organizations have begun AI pilots, only 5 percent have managed to reap substantial financial gains. Some 60 percent are failing to achieve any material value at all despite substantial investment. The antidote is discipline around how AI investments are evaluated, governed and killed when the evidence stops supporting them. That discipline has to start before the build decision, not after the drift sets in.

The pre-build diagnostic

Before an organization reaches for a governance framework, there is a more fundamental question that rarely gets the attention it deserves: Is this actually a problem AI is suited to solve, and does this organization have what it takes to support the solution over time? I have watched that question get skipped more times than I can count. The investment thesis gets built around what the model can do in a demo environment, and by the time the fit between the model and the actual problem becomes clear, the budget is already committed and the team is already building.

There are three things worth examining honestly before that happens. The first is whether the model can genuinely do the job at the scale and accuracy the business actually requires. Accuracy thresholds sound like a technical detail, but they carry real financial weight. If the business needs 98 percent accuracy and the model reliably delivers 85 percent, the human review layer required to catch and correct the gap will often cost more than the manual process the AI was supposed to replace.

Inference cost compounds that further. The true cost of an AI output includes not just tokens and compute but the ongoing engineering attention the system requires to stay functional. That number has to be meaningfully lower than the cost of human labor at production volume, not just at pilot scale. The scalability question is the one most sandboxes never answer honestly. A model that performs well on clean, bounded data in a controlled environment will frequently encounter the edge cases of real-world production and behave very differently.
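
To make the arithmetic of the last two paragraphs concrete, here is a back-of-the-envelope version of it. Every figure is invented for illustration; the shape of the calculation is what matters:

```python
# Back-of-the-envelope unit economics for an AI-assisted workflow.
# Every figure below is invented purely for illustration.

monthly_volume = 100_000   # outputs per month at production volume
inference_cost = 0.05      # tokens + compute per output, in dollars
review_cost = 1.20         # one human review of a flagged output, in dollars
human_only_cost = 0.60     # fully manual handling of one output, in dollars
engineering_cost = 12_000  # monthly engineering attention to keep it running

# The model delivers 85 percent accuracy against a 98 percent requirement.
# You cannot review only the wrong outputs, because you do not know which
# ones they are; here a confidence filter is assumed to route 40 percent
# of volume to human review to close most of the gap.
review_share = 0.40
review_volume = monthly_volume * review_share

ai_path = (monthly_volume * inference_cost
           + review_volume * review_cost
           + engineering_cost)
manual_path = monthly_volume * human_only_cost

print(f"AI + review path: ${ai_path:,.0f}/month")      # $65,000/month
print(f"Manual path:      ${manual_path:,.0f}/month")  # $60,000/month
```

At pilot volume, with the engineering cost amortized over a demo, the AI path looks cheap. At production volume, the review layer and the standing engineering cost dominate, and the “automated” process quietly costs more than the manual one it replaced.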

Whether the organization can actually support what it is proposing to build is the second, and often the most uncomfortable, set of questions. Data ownership sits at the center of it. A project that depends on a third-party data stream the organization does not control, or on data that lacks the cleanliness the model requires to perform, is carrying a foundational risk that no amount of engineering will resolve.

Integration complexity belongs in the same conversation. A high-performing model that cannot connect to existing systems without a custom middleware project that costs more than the value being generated is not a solution; it is a different problem. And the internal talent required to keep the system from drifting over time is the dimension that gets the least scrutiny during approval and the most attention eighteen months later when something starts to go wrong and nobody knows how to respond.

The third area is whether the business will actually accept and sustain the outcome, which is a different question from whether the technology works. In regulated industries, any model that cannot produce a clear audit trail for its decisions should not survive an early review, regardless of its performance metrics. Time to measurable signal matters because a project that cannot demonstrate proof of value within ninety days is asking for extended runway without evidence. That is how pilots quietly become permanent operational commitments.

Whether the capability is genuinely defensible is worth asking early. Spending significant capital to build something a competitor can replicate with the same off-the-shelf API and a week of engineering time is not innovation; it is an expensive way to achieve parity. And the people who are supposed to use the output have to actually trust it. A model that performs well technically but that underwriters, analysts or customers refuse to rely on has failed regardless of what the benchmark numbers say.

Working through these questions before the build decision gets made does not eliminate risk. But it shifts the conversation from what we could build to whether we are actually set up to build it well and sustain it honestly.

Governance proportional to risk

Assuming the diagnostic holds up and the case for building is genuine, the next question is what kind of governance the investment actually needs. Most organizations default to a single approach regardless of what they are building. That default is its own category of mistake. A speculative revenue experiment and a core operational system are not the same kind of bet. Treating them with the same oversight model will either strangle the experiment with bureaucracy or expose the core system to risk it was never designed to absorb.

The situation should determine the framework, not the other way around.

When an organization is exploring genuinely new territory, such as testing an AI-driven revenue stream or a product capability that has no internal precedent, the governance model needs to be tight at the front and earn its way to freedom. Room to explore without gates is how speculative projects consume eighteen months of runway without producing anything the business can point to. What works better is a short initial window to prove the basic math, a defined accuracy threshold that has to be cleared before real-world data enters the picture, and a clear escalation path from shadow environment to full integration. Each stage gets more autonomy because each stage has earned it.
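
One way to keep that honest is to write the gates down rather than leave them implied. A minimal sketch of staged promotion gates; the stage names, windows and thresholds here are entirely hypothetical:

```python
# A sketch of staged promotion gates for a speculative AI pilot.
# Stage names, windows and thresholds are hypothetical, not a standard.
from dataclasses import dataclass

@dataclass
class Gate:
    stage: str           # where the project is allowed to run
    window_weeks: int    # time allowed to clear the next gate before review
    min_accuracy: float  # threshold required to enter this stage
    data: str            # what data the project may touch at this stage

PIPELINE = [
    Gate("prove-the-math", 4, 0.00, "synthetic only"),
    Gate("shadow", 8, 0.95, "read-only production"),
    Gate("limited-launch", 12, 0.97, "live, human-in-the-loop"),
    Gate("integrated", 0, 0.98, "live"),
]

def next_stage(current: str, measured_accuracy: float) -> str:
    """Promote only when the next gate's threshold is already met."""
    stages = [g.stage for g in PIPELINE]
    i = stages.index(current)
    if i + 1 < len(PIPELINE) and measured_accuracy >= PIPELINE[i + 1].min_accuracy:
        return PIPELINE[i + 1].stage
    return current  # hold; a kill decision comes at window expiry, not here

print(next_stage("shadow", measured_accuracy=0.96))  # still 'shadow'
```

Autonomy expands only as evidence accumulates, which is exactly the inversion that FOMO resists.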

When the goal is modernizing internal operations, the governance question shifts. The risk profile is different because the organization is not exploring unknown territory; it is trying to do something it already does, but more efficiently. In these situations, the burden of proof moves away from accuracy and toward data. A model being trained on proprietary internal data to automate a known workflow is only as good as the data it runs on. Tight monitoring on error rates early, a clear standard for data sovereignty before any custom model work begins, and meaningful gates around the removal of manual steps are essential. The leeway expands as the evidence of process improvement accumulates, not before.

When the primary concern is margin protection on high-volume transactions, the economics have to be the governing logic from the start. The question is not whether AI can perform the task but whether the cost of AI performing the task stays below the cost of human labor at the volume the business actually runs. That calculation needs to be established as a baseline before build begins and monitored continuously afterward. Inference costs do not always scale linearly. A model that is economically viable at pilot volume can become a hidden tax on every transaction at production volume. The governance here is financial rather than technical. If the margin math stops working, the project stops regardless of how technically impressive the solution is.
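
The governing rule itself can be deliberately simple; the discipline is in computing it continuously at real volume and acting on the result. A sketch, again with invented numbers:

```python
# The "margin math" as a continuous kill criterion.
# The safety margin and all costs below are invented for illustration.

def margin_holds(ai_cost_per_txn: float,
                 human_cost_per_txn: float,
                 safety_margin: float = 0.8) -> bool:
    """The project survives only while the AI cost per transaction stays
    below safety_margin times the human cost it replaces."""
    return ai_cost_per_txn < human_cost_per_txn * safety_margin

# At pilot volume the math works; at production volume, per-transaction
# inference cost has crept up (retries, longer contexts, peak-rate pricing).
print(margin_holds(ai_cost_per_txn=0.30, human_cost_per_txn=0.55))  # True
print(margin_holds(ai_cost_per_txn=0.48, human_cost_per_txn=0.55))  # False: stop
```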

The most complex governance situation is the one where an organization needs to manage immediate operational pressure and longer-term strategic bets at the same time. The temptation is to treat everything with the same urgency, which often means that immediate fixes consume the bandwidth that strategic work requires. Separating these explicitly, with different oversight cadences, different capital thresholds and different definitions of success for each horizon, is what allows an organization to fix what is broken today without sacrificing the position it is trying to build for the future.

Final perspective

There is a version of this conversation that treats AI governance as a compliance exercise: A set of controls designed to slow things down and protect the organization from its own enthusiasm. That framing misses the point. These frameworks are not brakes. They are the difference between capital that compounds and capital that quietly drains away while everyone is focused on the technology.

The organizations that navigate this well share a few things in common that have nothing to do with the sophistication of their models or the size of their AI budgets. They have technology leaders who are willing to kill a project when the evidence stops supporting it. This sounds obvious but is genuinely rare when a team has been building for six months and the sunk cost is visible. They have CFOs and boards who understand that a well-governed AI portfolio will have failures in it, and that those failures are not evidence of a broken process but evidence that the process is working.

The organization I described at the beginning of this piece did not fail because it chose the wrong AI approach. It failed because it chose AI for a problem that did not require it. That was a governance error that happened before a single line of code was written. Getting the category right matters more than getting the model right.

Knowing which kind of problem you have before you decide which kind of solution to reach for, and then governing the investment in proportion to what you actually know, is what separates organizations building an advantage that holds from the ones already filing an AI post-mortem under “things that did not work out.”

This article is published as part of the Foundry Expert Contributor Network.