Market research from KPMG has found many Kiwis don’t trust artificial intelligence technology, even some who use it regularly.
The research, released today, found 69% of New Zealanders use AI regularly. Of those surveyed, 31% reported they couldn't complete their work without the help of AI, and 43% said they were concerned about being left behind if they did not use it.
However, only 34% were willing to trust it and 44% believed the risks of AI outweighed its benefits.
“Alongside other advanced economy countries, New Zealand is lagging in AI training and literacy,” said KPMG New Zealand chief digital officer Cowan Pettigrew.
“It’s important for us as a nation and as business owners that we invest in training to assist in removing the misunderstanding of AI, increase effective usage and allow the identification of opportunities for AI to play a role.”
Many workers reported benefits from using AI at work: 43% of those surveyed reported increased efficiency, quality of work and innovation, and 31% reported increased revenue-generating activity.
However, the report also showed that some use of AI at work was creating complex risks for organisations. For example, 51% of workers reported that they do not check the accuracy of AI output before using it for work.
Employers could also be increasing that risk through inaction: only 25% of respondents reported that their workplace had a policy for generative AI use.
The report, “Trust, attitudes and use of Artificial Intelligence: A global study 2025”, was led by the University of Melbourne in collaboration with KPMG and surveyed more than 48,000 people across 47 countries.
It also found that 81% of New Zealanders surveyed believed AI regulation was needed, and 85% were more willing to trust AI systems when they had assurance of their trustworthy use, such as knowing who would be accountable if something went wrong.
“What this tells us is that practical governance is key,” Pettigrew said. “New Zealanders are eager to get going with AI, but they want to do this in a regulated environment with assurance that the systems they are using can be trusted.”
Clear guardrails were required, along with tools for people to upskill in a supported way, including learning how to critically assess whether what they were getting from AI was accurate and reliable.
“This includes everyone at every point in the AI ecosystem,” Pettigrew said.
KPMG considered itself to be “client zero”, taking its own advice, applying its frameworks and services to its own business and sharing insights from that with clients.
KPMG also offered a trusted AI framework for designing, building and using AI in a responsible and ethical manner.
However, people had to be given the freedom, “within reason”, to experiment with AI to learn how to use it critically, Pettigrew said.
“Having guardrails in place allows our people to give AI tools a go safe in the knowledge that our data is protected.”