AI usage is proliferating rapidly throughout the global economy, but issues around trust are stifling success, new research suggests.
Nearly every enterprise is already using AI or plans to within the next 12 months. Yet according to the SAS Data and AI Impact Report, 46% of organizations’ AI initiatives are affected by the “trust dilemma”: the gap between perceived trust in AI systems and their actual trustworthiness.
This disconnect leads to two opposing risks, each of which prevents businesses from maximizing AI return on investment (ROI).
When trust in AI is low, employees underuse the technology. When they are overconfident in unverified systems, they over-rely on it.
To fully realize the value of their AI investments, organizations need to strike the right balance between the two.
The risks of trusting AI too much — or not enough
Despite the relative novelty of AI tools, the SAS report found that 78% of respondents have “complete trust” in the technology, even though only 40% of systems show “advanced or high levels of AI trustworthiness.”
What’s more, respondents scoring low on AI trustworthiness actually trusted genAI 200% more than traditional machine learning tools. Kimberly Nevala, a strategic advisor with SAS, attributed this to the conversational nature of the technology, and the fact that users can prompt it, read the responses, and then redirect it as they see fit.
“There is a sense that you have a greater degree of agency and control in this process than perhaps we really do based on how the systems work,” Nevala said in a recent CIO webcast. “They’re also designed to always answer, and they are always confident collaborators. It’s a subtle and seductive thing.”
The more users trust AI tools, the more they utilize them, Nevala continued.
“And this is a problem, because if we have too much trust, we are likely to over-rely on it,” she said. “So, we are inviting not only potentially large errors but also increasing the risk exposure of our organizations.”
On the other hand, when employees don’t trust AI enough, they tend to under-rely on the technology, which leaves value and “really sustained outcomes on the table,” Nevala said. “And so, addressing this trust dilemma and bringing [trust and trustworthiness] into balance is really important.”
How to enable trustworthy AI
Maximizing AI ROI is only possible when organizations have a high degree of confidence that their tools will work as intended. To get there, organizations need to build guardrails into AI-driven processes and train their teams to know when to use AI systems and when to avoid them.
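As a concrete illustration of what such a guardrail might look like, the Python sketch below shows a hypothetical confidence gate that routes low-confidence or out-of-scope AI outputs to a human reviewer rather than releasing them automatically. The threshold, scopes, and names here are illustrative assumptions, not something prescribed by the SAS report.

```python
from dataclasses import dataclass

# Hypothetical guardrail: hold back AI outputs when the model's reported
# confidence is low, or when the request falls outside the business scopes
# the organization has approved for AI use. All values are illustrative.

APPROVED_SCOPES = {"customer_support", "document_summary"}
CONFIDENCE_THRESHOLD = 0.85  # assumed cutoff; tune per use case

@dataclass
class AIResult:
    text: str
    confidence: float  # 0.0 to 1.0, as reported by the model or a verifier
    scope: str         # business task the request belongs to

def apply_guardrail(result: AIResult) -> str:
    """Decide whether an AI output can be used directly or needs review."""
    if result.scope not in APPROVED_SCOPES:
        return "blocked: task outside approved AI scope"
    if result.confidence < CONFIDENCE_THRESHOLD:
        return "escalated: sent to human reviewer"
    return "approved: output released to workflow"

# Example: a low-confidence answer gets escalated rather than trusted blindly.
print(apply_guardrail(AIResult("Refund issued.", 0.62, "customer_support")))
```

The point of the gate is not the specific threshold but the pattern: over-reliance is curbed because unverified outputs never flow straight into a business process.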
Gretchen Stewart, AI solution architect at Intel, highlighted the importance of project communication. When teams share information on areas such as risk mitigation and results, people realize that “the integrity of the system is built into it,” she said.
“Developing a trustworthy AI system and developing trust in AI is a process,” Nevala added. “It happens through a series of decisions that happen from the start to the end of the AI lifecycle and into deployment and beyond.”
As AI initiatives roll out, such decisions involve establishing business boundaries, defining security and privacy requirements, deciding which models and tools to allow and disallow, and choosing which processes need humans in the loop.
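One lightweight way to make those decisions explicit is to express them as a machine-readable policy that can be checked before any AI call is made. The sketch below is a hypothetical Python policy check; every model name and process in it is an assumption for illustration only.

```python
# Hypothetical AI governance policy expressed as data, so that decisions
# about allowed models and human-in-the-loop steps are auditable and
# enforceable rather than tribal knowledge. All entries are illustrative.

POLICY = {
    "allowed_models": {"internal-llm-v2", "vendor-model-approved"},
    "denied_models": {"unvetted-public-chatbot"},
    "human_in_the_loop": {"loan_approval", "medical_triage"},  # always reviewed
}

def check_request(model: str, process: str) -> str:
    """Enforce the policy before an AI call is made."""
    if model in POLICY["denied_models"] or model not in POLICY["allowed_models"]:
        return f"denied: {model} is not an approved model"
    if process in POLICY["human_in_the_loop"]:
        return f"allowed with review: {process} requires a human sign-off"
    return "allowed: automated use permitted"

print(check_request("internal-llm-v2", "loan_approval"))
# -> allowed with review: loan_approval requires a human sign-off
```

Keeping the policy as data rather than scattered code means boundaries can be updated as trust in specific models is earned, or revoked, over time.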
Building trustworthy AI is an ongoing discipline. The organizations that get it right will be the ones to realize AI’s full ROI.
To learn more about solving the AI trust dilemma and unleashing AI ROI, watch the webcast.