A widely shared graphic on the 25 reasons projects fail prompted this article. I am using that chart as a prompt, not as evidence. The more useful question is not whether the familiar causes of failure are real. They are. The more useful question is why they keep repeating across programs, portfolios and enterprise transformations, even after years of investment in methods, PMOs, digital tools and AI.
The answer, in many cases, is not a lack of effort. It is a lack of decision logic. Enterprises still launch, govern and defend large initiatives without a planning discipline capable of calculating trade-offs, exposing constraints, modeling dependencies and recalculating the impact of change quickly enough to support real governance.
The pattern under the pattern
Most discussions of project failure start with visible symptoms: unclear scope, weak requirements, scope creep, poor communication, resource shortages, unrealistic deadlines, weak sponsorship and poor change control. Those symptoms matter, but when they recur at scale, they usually point to a deeper problem in the planning system itself. In PMI’s 2025 research on the strategy execution gap, PMI President and CEO Pierre Le Manh argued that AI will create value only when organizations can translate bold ideas into executed initiatives. In most enterprises, the gap is not ambition. The gap is conversion. Strategy is declared, portfolios are funded, work begins, yet leaders still cannot calculate trade-offs, expose constraints, model dependencies or replan fast enough when conditions change.
The scale of the issue is hard to dismiss. BCG’s 2024 study of large-scale technology programs found that more than two-thirds are not expected to be delivered on time, within budget and within scope, and that only 30% fully meet expectations on those three dimensions. Gartner’s 2024 survey found that only 48% of digital initiatives across the enterprise meet or exceed their business outcome targets. Those are not isolated execution misses. They are signs of systemic underperformance in how organizations prioritize, fund, sequence and govern change.
Other firms sharpen the diagnosis from different directions. McKinsey’s work on successful transformations found that among companies whose transformations failed to engage line managers and frontline employees, only 3% reported success. Bain’s David Michels argues that “red is good,” meaning organizations perform better when risk is surfaced early rather than hidden behind reassuring dashboards. Deloitte’s research on digital acceleration and strategy makes the strategic requirement explicit: Digital possibilities must shape strategy, and strategy must shape digital priorities. Put together, those findings point to one conclusion. Large programs rarely fail because a single team misses a task. They fail because the enterprise cannot see the interaction of priorities, constraints, dependencies and consequences early enough to respond intelligently.
Why this is a planning problem, not just a delivery problem
At the portfolio level, failure begins when organizations select too much work, fund the wrong work or fund the right work without a realistic view of capacity, technical debt and delivery interdependencies. BCG ties poor outcomes directly to inaccurate timeline and resource planning, weak end-to-end roadmaps and ineffective management of interdependencies. That is not simply a delivery problem. It is a portfolio design problem. Forrester’s 2025 work on operating model change adds a related warning: Fewer than half of IT leaders say their organizations prioritize operating model adaptation, leaving strategy to collide with structures that are not built to absorb change.
At the governance level, failure shows up as a value problem. Traditional oversight mechanisms can collect status, enforce templates and schedule reviews, yet still fail to answer the executive question that matters most: What happens if a key dependency slips, a budget is reduced or a shared team becomes overcommitted? Bain’s “red is good” matters here because watermelon reporting, green on the outside and red underneath, is usually a sign that governance is reporting milestones instead of modeling consequences. Gartner’s survey of Digital Vanguard organizations reinforces the point: the highest-performing digital organizations are those where business and technology leaders are more aligned on execution and outcome ownership.
At the execution level, the familiar problems remain, but they look different when viewed through a planning lens. PMI’s communications research found that one out of five projects is unsuccessful due to ineffective communication, and PMI’s later analysis of communication failures linked poor communication to more than half of the projects that fail to meet business goals. The important nuance is that communication is not merely a soft skill problem. It is often a failure to express the implications of planning decisions in a form that the business can act on. An unclear scope can be a weak scenario definition. Poor requirements can reflect commitments made before constraints were visible. Scope creep is often an unmanaged consequence. Weak sponsorship often reflects weak evidence. Poor change control often means the organization can log a change but cannot calculate its ripple effects.
Why algorithmic planning is now a governance requirement
This is where the conversation needs to become more precise. Continuous scenario planning is valuable, but it only becomes decision-grade when it is supported by algorithmic planning. In large programs and portfolios, governance cannot rely on static reporting, intuition or periodic review alone. It must be able to calculate the impact of change quickly, expose hard constraints clearly and place dependencies, capacity limits, sequencing conflicts and trade-off consequences where they belong, at the center of decision-making. Without that discipline, governance is mostly a matter of interpretation. With it, governance becomes evidence-based control. That conclusion follows directly from the documented failure patterns of PMI, BCG, McKinsey, Bain, Deloitte and Gartner.
AI makes this requirement even more important. Used well, AI can be a powerful interface for senior leaders, helping them interrogate scenarios, surface anomalies, summarize risks and engage more directly with the planning environment. Used badly, it can do the opposite. If AI is not tightly coupled to mathematically sound planning data, explicit constraints, dependency logic and algorithmic calculations, it can turn supposition into false confidence. That is dangerous in portfolio and program governance, where plausible-sounding answers are not the same as decision-grade answers. The sequence matters. First, the organization needs a locked-down, calculation-based planning model with clear borders. Then AI can sit on top of that model as an accelerator, interpreter and executive interface. Without those boundaries, AI can easily magnify weak assumptions rather than expose them. This caution is consistent with PMI’s strategy execution framing and with EY’s 2026 CEO Outlook and Accenture’s AI reinvention thesis, both of which insist that AI must be scaled with discipline and strong foundations.
Strategic intent is inherently directional. Governance must be exacting. The bridge between the two is algorithmic planning. It is the mechanism that translates ambition into modeled consequences by testing scenarios, exposing constraints, mapping dependencies and recalculating trade-offs as conditions change. Without that bridge, governance becomes subjective. With it, leadership can distinguish between what is desirable, what is feasible and what is now at risk. That is why constraints, dependencies and capacity should not be treated as soft considerations. They are the black-and-white rules of execution.
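The “black-and-white rules of execution” can be made concrete. As a minimal sketch, with invented team names and hour figures, this is the kind of hard capacity check an algorithmic planning model runs continuously, rather than leaving overcommitment to be discovered mid-delivery:

```python
# Hedged sketch: a minimal capacity-constraint check for a portfolio.
# All teams, projects and hour figures below are illustrative assumptions.

def overcommitted_teams(capacity, demands):
    """Return {team: overload_hours} for every team whose total
    committed demand exceeds its available capacity."""
    load = {}
    for project, needs in demands.items():
        for team, hours in needs.items():
            load[team] = load.get(team, 0) + hours
    return {t: load[t] - capacity[t] for t in load if load[t] > capacity.get(t, 0)}

capacity = {"platform": 800, "data": 600, "security": 300}   # hours per quarter
demands = {
    "crm_migration": {"platform": 500, "data": 200},
    "ai_pilot":      {"data": 450, "security": 150},
    "compliance":    {"platform": 400, "security": 250},
}
print(overcommitted_teams(capacity, demands))
# Every team is overloaded; the portfolio is infeasible as funded.
```

The point is not the arithmetic, which is trivial, but that the check is deterministic: a plan either fits the constraint or it does not, and governance can see which teams break first.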
AI is most valuable when it explains a sound planning model, not when it improvises one.
Why continuous scenario planning matters
Continuous scenario planning becomes strategically important when it gives leaders a way to compare options side by side, test trade-offs before they commit, expose bottlenecks early, map dependency cascades and continuously recalculate what changes when budgets, priorities or constraints shift. That directly addresses many of the structural drivers identified above. It does not solve every reason projects fail. It does attack a large share of the root causes beneath them.
Seen this way, many of the familiar 25 reasons collapse into a smaller set of systemic failures. An unclear scope often results from a weak scenario definition. Poor requirements are often commitments made before constraints and dependencies were visible. Scope creep is often an unmanaged consequence. Poor communication often reflects fragmented planning logic, with business, finance and delivery working from different maps. Resource shortages are often hidden by overcommitment. Weak sponsorship often reflects weak evidence. Poor change control usually means the organization can record changes but cannot model impact. At the project level, teams can sometimes survive these problems through heroic effort. At the portfolio level, heroics stop working. Constraints win. Bottlenecks win. The question is whether leadership can see them early enough to respond intelligently.
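The dependency-cascade idea can also be sketched. Assuming a hypothetical four-project dependency graph with durations in weeks, this toy recalculation shows how a single slip ripples downstream, which is exactly the answer static status reporting cannot produce:

```python
# Hedged sketch: propagating a slip through a dependency graph.
# Projects, durations and the slip are invented for illustration.

def recalc_finish(durations, deps, slips=None):
    """Earliest finish week per project: a project starts once all of
    its dependencies finish, then takes its duration plus any slip."""
    slips = slips or {}
    finish = {}
    def f(p):
        if p not in finish:
            start = max((f(d) for d in deps.get(p, [])), default=0)
            finish[p] = start + durations[p] + slips.get(p, 0)
        return finish[p]
    for p in durations:
        f(p)
    return finish

durations = {"api": 4, "data_model": 6, "reporting": 3, "rollout": 2}
deps = {"reporting": ["api", "data_model"], "rollout": ["reporting"]}

baseline = recalc_finish(durations, deps)
slipped = recalc_finish(durations, deps, slips={"data_model": 3})
ripple = {p: slipped[p] - baseline[p] for p in durations}
print(ripple)  # the data_model slip delays reporting and rollout too
```

A real planning engine adds calendars, shared resources and cost, but the governance value is the same: the consequence of a change is calculated, not estimated in a status meeting.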
PMI’s newer M.O.R.E. framework supports this shift. PMI argues that project outcomes improve materially when organizations manage perceptions, own success, relentlessly reassess and expand perspective. Two of those ideas matter especially here. Relentlessly reassess describes a discipline of continuous adjustment as conditions shift. Managing perceptions requires communicating value and risk in ways stakeholders can act on. That is remarkably close to what mature continuous scenario planning should do at scale.
Why the urgency is rising
The pressure on CIOs is increasing, not falling. EY’s 2026 CEO Outlook says leaders are pursuing growth and adaptability through bold AI transformation, with 2026 becoming a turning point as organizations move from pilots to scaled enterprise use. Accenture makes a similar point from a different angle, arguing that organizations that build strong AI foundations will be better positioned to reinvent, compete and achieve new levels of performance. Those are reasonable claims, but they do not reduce the need for disciplined planning. Faster change increases the premium on a planning system that can calculate consequences quickly and credibly. AI can accelerate analysis, summarize scenarios and improve executive access to planning insight. It cannot replace the need to govern trade-offs across budgets, capacity, architecture, timing and risk. In fact, AI is only trustworthy in this context when it is tightly coupled to mathematically sound planning data, explicit constraints, dependency logic and algorithmic calculations. Otherwise, it risks producing plausible but unsupported answers.
What CIOs should demand
For CIOs, this leads to a more useful conclusion than simply restating the 25 reasons projects fail. Large programs usually fail because the enterprise cannot see and govern the interaction of those reasons in time. A modern control system for change, therefore, needs at least six capabilities: A unified planning model across priorities, budgets and capacity; side-by-side scenario comparison; interdependency mapping; early visibility into bottlenecks; continuous recalculation as conditions shift; and executive-facing summaries that turn data into decisions. Those are the capabilities that make continuous scenario planning strategically important. The question is no longer whether planning happens. It already does. The real question is whether planning remains static, fragmented and largely narrative, or whether it becomes dynamic, scenario-based and decision-grade.
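Side-by-side scenario comparison, one of the capabilities listed above, reduces to a simple discipline: compute the same hard metrics for every candidate plan and flag constraint violations before funding decisions are made. A minimal sketch, with invented scenarios and a notional budget cap:

```python
# Hedged sketch: comparing funding scenarios side by side.
# Scenario names, costs, value scores and the budget cap are assumptions.

def compare(scenarios, budget=1000):
    """Score each scenario on total cost and value, flagging any
    scenario that breaks the budget constraint."""
    rows = []
    for name, projects in scenarios.items():
        cost = sum(p["cost"] for p in projects)
        value = sum(p["value"] for p in projects)
        rows.append((name, cost, value, "OVER BUDGET" if cost > budget else "ok"))
    return rows

scenarios = {
    "ambitious": [{"cost": 700, "value": 9}, {"cost": 500, "value": 6}],
    "focused":   [{"cost": 700, "value": 9}, {"cost": 250, "value": 4}],
}
for name, cost, value, flag in compare(scenarios):
    print(f"{name:10} cost={cost:4} value={value:2} {flag}")
```

The “ambitious” scenario scores higher on value but violates the budget constraint; the comparison makes the trade-off explicit instead of leaving it to advocacy in a steering committee.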
That is the real fix hidden beneath the 25 symptoms.
This article is published as part of the Foundry Expert Contributor Network.

