Enthusiasm for AI may be running high, but Australian enterprises remain cautious about putting all their chips on the table. The Reserve Bank of Australia recently found that enterprise-wide AI transformation remains the exception rather than the norm.
That caution isn’t due to a lack of appetite. It has everything to do with pragmatism.
Australian organisations have been channelling technology investment into cybersecurity, compliance and legacy system upgrades. Add persistent skills shortages and legitimate concerns around trust and safety, and many feel they’re being left behind on AI.
Aram Lauxtermann, Head of Market Strategy and GTM at Datacom, argues this caution is well-founded. “I think it’s very good for a lot of the large organisations and just departments in general, to be very cautious about AI implementations,” he said.
“What we still see is that a large amount of AI pilots fail, and around 30 per cent of the organisations we’re speaking with do not have a clear AI strategy that defines where they want to go and what they want to become.”
Market noise isn’t helping enterprises land an AI strategy, either. Vendors are rebranding existing products as AI, making it challenging for enterprises to identify genuine value once they start digging into the solutions.
“It’s becoming harder and harder for a lot of the leaders to actually identify what is real value and what is the value that we can get out of artificial intelligence implementations,” Lauxtermann said.
A framework for responsible AI
For organisations operating in highly regulated sectors such as government, healthcare and education, a clear methodology is particularly important. Datacom’s approach to supporting these enterprises focuses on answering four questions: What is your AI vision – what do you want to become? What is your framework – how do you achieve it? How do you execute responsibly? And how do you realise your vision?
“A lot of companies start with focusing only on a couple of use cases, and then it can become very disjointed because there’s no overarching strategy that pulls it all together,” Lauxtermann said.
Datacom has applied AI to its own operations first, creating AI agents that write code, test systems and conduct analysis alongside human developers, testers and business analysts. This approach now extends to client work, particularly around legacy system modernisation.
The scale of the legacy problem is significant. Lauxtermann notes that 60 to 80 per cent of IT budgets often go to maintaining old systems. Around 70 per cent of modernisation programmes fail, and nearly half of security exploits target these ageing platforms.
“A lot of these systems were very poorly documented because they’re pretty old and very often the documentation was not up to date,” he said.
The methodology pairs BA agents with human oversight to analyse entire systems, then deploys development agents alongside project managers to modernise them. Because the approach delivers a like-for-like migration first, testers can compare both systems to verify everything works correctly.
“We’ve gotten so mature with this methodology that, for all our clients, we will commercially fully de-risk this with fixed fee, fixed outcome contracts,” Lauxtermann said.
From modernisation to operations
The same agent methodology now extends beyond modernisation into ongoing operations. An AI app assurance approach means agents can detect when something breaks, direct a developer to fix the code, write reports and hand over to humans for validation.
Lauxtermann described a government agency that had previously hired five business analysts who took ten months to produce 30,000 pages of documentation for a legacy system. The result was hard to understand and time-consuming to create.
The next time, the agency engaged Datacom.
“We provisioned a squad of some real people, BAs, surrounded by multiple BA AI agents. These BA agents produced a similar amount of documentation with the same level of detail, but they did that in three weeks instead of ten months,” Lauxtermann said. “Also, they were able to synthesise and highlight the key things to focus on and create all the diagrams and graphs to actually make it understandable, actionable and consumable by humans.”
Another state government body faced security challenges with limited mobile device capabilities and login issues. The same methodology analysed, converted and tested their entire legacy estate.
“We were able to resolve most of their security challenges. We were able to, after that, validate that they’re not at risk, that the system is working correctly, and all within a period of around a third of the time that it would have taken if humans had done it,” Lauxtermann said.
Australian organisations might currently feel they have to trade off speed against security and productivity when considering all-of-business AI adoption, but they don’t need to choose. With the right structured approach, supported by a strategic partner, augmenting human expertise with AI agents can be achieved while minimising risk.
See how Datacom helps organisations modernise with AI, without increasing risk.