It is easy to think of AI as nothing more than a technology race. IT and business leaders have seen the impact of generative AI and are preparing for agentic AI to take control of complex workflows, transforming how entire businesses and industries operate.
But that ignores the fact that developing, implementing and running an AI strategy involves a series of decisions that must be made by humans. If that decision-making process is flawed and trust is lost, it might be impossible to get back on track.
The stakes are high. IDC research shows that by 2030, 45% of organisations will orchestrate AI agents at scale.
But there will be bumps in the road long before then. By 2030, a fifth of G1000 organisations will have experienced significant disruption, including lawsuits, substantial fines and CIO dismissals, due to inadequate controls and governance of AI agents.
In the meantime, IDC warns that companies face a big hit to their productivity if they do not prioritise high-quality, AI-ready data by 2027.
That means 2026 is critical. This will be the year that businesses make decisions that will affect their chances of extracting value from new AI technologies in the years ahead.
So how should technology leaders work through this? And who can they look to for help?
As Mat Franklin, VP & managing partner at Fujitsu’s consulting business Uvance Wayfinders, Oceania, explains in a CIO webcast, technology leaders should remember that a business is a human endeavour.
And humans need to be able to trust the AI systems they’re relying on to help them realise value.
“The challenge is really about understanding how decisions are made, and that’s a fundamentally human problem,” Franklin says.
The right technological foundations are, of course, critical if companies are to benefit from AI, adds Ashok Govindaraju, VP and partner, Uvance Wayfinders Consulting, Oceania.
But he says: “At the heart of everything driven by decision-making powered by AI, you have the partnership between AI and human beings.”
Supporting that partnership means being clear where the handoff points between humans and machines should be.
“Who are the right stakeholders to be making those decisions? Who has approval rights? What levels of approval rights do AI systems have, is going to be important. That’s number one,” adds Govindaraju.
The right insights
Once they have resolved these big questions, businesses can fully exploit AI's ability to examine millions of data points and historical precedents and surface the right insights.
And this is where Uvance Wayfinders and its parent Fujitsu deploy research-backed, patent-pending technologies to fill gaps in AI judgement and prevent hallucinations.
Ultimately, says Govindaraju, “70% of what happens in an organisation can easily be supported by decisions from systems and applications.”
The other 30% requires human judgement, but that judgement doesn’t happen in isolation. Designing workflows where technology can support these decisions is critical, and understanding their provenance is essential.
Ultimately human decision-making hasn’t changed, Franklin argues. The question is how companies can make good decisions based on human and AI inputs.
As Franklin puts it: “I don’t think there are AI opportunities or problems. I think there are business opportunities or problems.”
Watch the other video discussions in this series.

