The Thinkers360 AI Trust Index examines the current state of trust in AI from both the AI end-user and AI provider perspectives, serving as an annual pulse check on sentiment across various aspects of trust.
The primary research survey was conducted in Q3 2024 among Thinkers360 members and the global community of AI end users and providers. This year’s findings put the overall AI Trust Index score for 2024 at 308 on a scale from 100 (not concerned) to 400 (extremely concerned), up from 224 in 2023. While AI is moving rapidly into mainstream adoption, trust in AI is still a work in progress.
Based on the survey findings, here are five recommendations for CIOs as you continue your AI implementation journey in 2025.
Use industry research…
…to understand how trust varies by scenario and industry.
The AI Trust Index found a range of trust levels across scenarios and industries. Among scenarios, those involving media drew the highest concern, followed by personal, workplace, and government scenarios. AI end users and providers cited cybercrime (87%), misinformation (87%), and bias (80%) as the three areas where they were very or extremely concerned. Of least concern were employment (56%) and legal (57%).
In terms of industry, AI end users and providers cited defense and intelligence (82%), government (71%), and finance (55%) as the three areas where they were very or extremely concerned. Media and entertainment (54%) was another industry of concern related to the current use of AI. Of least concern were agriculture (13%), retail (22%), and manufacturing (28%).
CIOs can leverage this data, along with other third-party research, to understand where their industry stands on trust in AI and which scenarios concern end users most. AI’s potential vulnerability to cybercrime is clearly an area for CIOs to watch closely in 2025, as are some of the lower-concern areas such as employment and the use of AI for hiring decisions, which still worried over 50% of those surveyed.
Pay close attention…
…to all attributes of trust in AI as they evolve.
The AI Trust Index looked at seven attributes of trust in AI, as defined by NIST in its building blocks of AI trustworthiness. It found that over 65% of AI end users and providers were very or extremely concerned about whether AI is accountable and transparent. The attribute drawing the least concern was whether AI is explainable and interpretable, yet even there only 12% of AI end users and providers reported no concern. Overall, concern about AI was distributed fairly evenly across all seven attributes of trust, with over 87% of respondents somewhat concerned or higher on each.
The takeaway for CIOs is to ensure you address all attributes of trust in AI. The seven are: accountable and transparent; privacy-enhanced; valid and reliable; fair, with harmful bias managed; safe (e.g., life, health, property); secure and resilient; and explainable and interpretable. Their relative importance may vary depending on what’s getting media attention, where you are in your AI implementation journey, the specific use cases you’re implementing, and what matters most to your end users. The key, however, is to address all of them, since your trust posture is only as strong as its weakest link.
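One lightweight way to hold yourself to that weakest-link standard is to score every attribute in each governance review and report the minimum rather than an average. Here's a minimal sketch in Python; the 1–5 maturity scale and the review structure are illustrative assumptions, not a standard tool:

```python
# The seven NIST trustworthiness attributes cited by the AI Trust Index.
NIST_TRUST_ATTRIBUTES = [
    "accountable and transparent",
    "privacy-enhanced",
    "valid and reliable",
    "fair with harmful bias managed",
    "safe",
    "secure and resilient",
    "explainable and interpretable",
]

def weakest_link(scores: dict[str, int]) -> tuple[str, int]:
    """Return the lowest-scoring attribute, refusing to run if any
    attribute was skipped (an unassessed attribute is an unknown risk)."""
    missing = [a for a in NIST_TRUST_ATTRIBUTES if a not in scores]
    if missing:
        raise ValueError(f"No assessment recorded for: {missing}")
    return min(scores.items(), key=lambda kv: kv[1])

# Example: hypothetical 1-5 maturity scores from a governance review.
review = {a: 3 for a in NIST_TRUST_ATTRIBUTES}
review["secure and resilient"] = 1
print(weakest_link(review))  # -> ('secure and resilient', 1)
```

Trivial as it is, the shape matters: surfacing the minimum rather than the mean is what makes the review reflect the weakest-link framing.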
Implement an AI governance framework…
…that addresses all aspects of AI, from AI/ML to gen AI to agentic AI.
In 2025, as organizations continue to embrace not only AI/ML and gen AI but also agentic AI (autonomous agents that automate manual tasks), it’ll be important to ensure your AI governance framework addresses all aspects of this rapidly moving field.
As agentic AI permeates core processes and enterprise workflows such as software programming, cybersecurity, ERP, CRM, BI, supply chain, retail, and other areas, the trust equation will shift from informational trust issues to transactional ones: ensuring appropriate levels of human oversight, accountability, transparency in decision-making, exception handling, and so on. While the no-code/low-code nature of agentic AI will streamline business process redesign efforts, it’ll be critical to reinvest a suitable share of those time savings in thorough testing across all workflows and scenarios. Even if your AI is smart enough to handle exceptions, it’s important to test those situations carefully as well.
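To make transactional trust concrete, here's a minimal sketch of a human-oversight gate an agent runtime might apply before executing high-impact actions. The action categories, the 0.8 confidence threshold, and all function names are illustrative assumptions, not any specific framework's API:

```python
from dataclasses import dataclass

# Illustrative high-risk action categories; real ones would come from
# your AI governance framework's risk classification.
HIGH_RISK_ACTIONS = {"issue_refund", "change_supplier_order", "modify_access"}

@dataclass
class AgentAction:
    name: str          # e.g., "issue_refund"
    payload: dict      # parameters the agent proposes to execute with
    confidence: float  # agent's self-reported confidence, 0.0-1.0

def execute_with_oversight(action, approve_fn, execute_fn, log_fn):
    """Route high-risk or low-confidence actions to a human reviewer
    before execution, and log every decision for accountability and
    transparency in decision-making."""
    needs_review = action.name in HIGH_RISK_ACTIONS or action.confidence < 0.8
    if needs_review and not approve_fn(action):
        log_fn(action, status="rejected_by_reviewer")
        return None
    result = execute_fn(action)
    log_fn(action, status="executed")
    return result
```

The same harness doubles as a test fixture: stubbing approve_fn and execute_fn lets you replay recorded exception scenarios through every workflow path before go-live.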
Decide on AI policies…
…to align with, communicate clearly to end users, and use to proactively shape trust in your implementations.
Aligning with various national and international pacts and other forms of standards, policies, and agreements is a great way to demonstrate commitment to AI ethics to end users. For example, the EU AI Pact supports “voluntary commitments from the industry to adopt the principles of the EU AI Act before its official implementation.” Your AI governance practices can be a key differentiator, so it’s important to communicate internally as well as with customers and partners.
In addition to signing pacts and aligning with industry best practices, you can take direct action in how you apply AI to measurably improve various trust attributes. “It might make sense to combine LLMs with ML models that are a lot more deterministic,” according to Anirudh Narayan, CGO at Lyzr.ai. “LLM models by themselves have some hallucinations and accuracy is lower, but you can get much cleaner, more deterministic outputs with an ML model embedded in it. This dramatically takes accuracy up from, let’s say, 67% to 95%. That dual engine power might be really helpful for CIOs to look at while building their AI infrastructure, or while considering an agent framework for building their agents.”
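Here's a minimal sketch of that dual-engine pattern, assuming a hypothetical llm_classify_intent wrapper around whatever LLM API you use and a small scikit-learn pipeline trained on toy data; the 0.9 override threshold is an illustrative tuning choice, not Lyzr's implementation:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy training data standing in for your labeled historical tickets.
train_texts = ["refund my order", "reset my password", "cancel my subscription",
               "where is my package", "update my billing address", "close my account"]
train_labels = ["billing", "account", "billing", "shipping", "billing", "account"]

# Deterministic engine: the same input always yields the same prediction.
ml_model = make_pipeline(TfidfVectorizer(), LogisticRegression())
ml_model.fit(train_texts, train_labels)

def llm_classify_intent(text: str) -> str:
    # Hypothetical stand-in for a real LLM call (e.g., a chat-completion
    # request prompted to return one label from the same set).
    return "billing"

def classify_with_dual_engine(text: str) -> str:
    llm_label = llm_classify_intent(text)
    ml_label = ml_model.predict([text])[0]
    ml_conf = ml_model.predict_proba([text]).max()
    # Let the deterministic model override the LLM when it is confident,
    # reducing hallucination-driven errors on well-covered cases.
    return ml_label if ml_conf >= 0.9 else llm_label

print(classify_with_dual_engine("please reset my password"))
```

The design point is that the deterministic path is repeatable and auditable, which directly supports the valid-and-reliable and accountable-and-transparent attributes discussed above.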
Ensure your program addresses the big picture…
…across AI governance, risk management, ethics, communication, and change management.
With agentic AI poised to impact so many areas of the business in 2025, CIOs will need to prepare for organization-wide changes akin to those of the web era and the digital transformation era. This necessitates a programmatic approach across AI governance, risk management, ethics, communication (including training and education), and change management.
“As companies scale AI and integrate agents, trust from employees and customers is paramount,” says Steve Chase, vice chair, artificial intelligence and digital innovation at KPMG. “CIOs must ensure the AI strategy and governance are well-aligned and grounded in a deep understanding of AI’s present and emerging capabilities, and supported by a modern data foundation. This grounding is essential for organizations to guide their employees through the change, and ultimately maximize AI’s value responsibly and ethically.”
CIOs should work closely with their CAIOs and other stakeholders to act as a guiding light in AI governance, ensuring that AI is used responsibly, ethically, and in alignment with the organization’s strategic goals. As CIO, you provide the technical expertise, data governance framework, and risk management oversight necessary to build trust in AI and maximize its benefits.